
96 posts tagged with “tech industry”

Back in September, when Trump announced America by Design and appointed Joe Gebbia as Chief Design Officer, I wrote that it was “yet another illustration of this administration’s incompetence.” The executive order came months after DOGE gutted 18F and the US Digital Service, the agencies that had spent a decade building the expertise Gebbia now claims to be inventing.

Mark Wilson, writing for Fast Company, spoke to a dozen government designers about how Gebbia’s tenure has played out. When Wilson asked Gebbia about USDS and 18F—whether he thought these groups were overrated and needed to be rebuilt—here’s what he said:

“Without knowing too much about the groups you mentioned, I do know that the air cover and the urgency around design is in a place it’s [never] been before.”

He doesn’t know much about them. The agencies his administration destroyed. The hundreds of designers recruited from Google, Amazon, and Facebook who fixed healthcare.gov and built the COVID test ordering system. He doesn’t know much about them.

Mikey Dickerson, who founded USDS, on the opportunity Gebbia inherited:

“He’s inheriting the blank check kind of environment… [so] according to the laws of physics, he should be able to get a lot done. But if the things that he’s allowed to do, or the things that he wants to do, are harmful, then he’ll be able to do a lot of harm in a really short amount of time.”

And what has Gebbia done with that blank check? He’s built promotional websites for Trump initiatives: trumpaccounts.gov, trumpcard.gov, trumprx.com. Paula Scher of Pentagram looked at the work:

“The gold card’s embarrassing. The typeface is hackneyed.”

But Scher’s real critique goes beyond aesthetics.

“You can’t talk about people losing their Medicare and have a slick website,” says Paula Scher. “It just doesn’t go.”

That’s the contradiction at the center of America by Design. You can’t strip food stamps, gut healthcare subsidies, and purge the word “disability” from government sites, then turn around and promise to make government services “delightful.” The design isn’t the problem. The policy is.

Scher puts it plainly:

“[Trump] wants to make it look like a business. It’s not a business. The government is a place that creates laws and programs for society—it’s not selling shit.”

Wilson’s piece is long and worth reading in full. There’s more on what USDS and 18F actually accomplished, and on the designers who watched their work get demolished by people who didn’t understand it.

Man in a casual jacket and sneakers standing before a collage of large "AMERICA" and "DESIGN" text, US flag and architectural imagery.

From Airbnb to the White House: Joe Gebbia is reshaping the government in Trump’s image

The president decimated the U.S. government’s digital design agencies and replaced them with a personal propaganda czar.

fastcompany.com

The optimistic case for designers in an AI-driven world is that design becomes strategy—defining what to build, not just how it looks. But are designers actually making that shift?

Noam Segal and Lenny Rachitsky, writing for Lenny’s Newsletter, share results from a survey of 1,750 tech workers. The headline is that AI is “overdelivering”—55% say it exceeded expectations, and most report saving at least half a day per week. But the findings by role tell a different story for designers:

Designers are seeing the fewest benefits. Only 45% report a positive ROI (compared with 78% of founders), and 31% report that AI has fallen below expectations, triple the rate among founders.

Meanwhile, founders are using AI to think—for decision support, product ideation, and strategy. They treat it as a thought partner, not a production tool. And product managers are building prototypes themselves:

Compare prototyping: PMs have it at #2 (19.8%), while designers have it at #4 (13.2%). AI is unlocking skills for PMs outside of their core work, whereas designers aren’t seeing the marginal improvement benefits from AI doing their core work.

The survey found that AI helps designers with work around design—research synthesis, copy, ideation—but visual design ranks #8 at just 3.3%. As Segal puts it:

AI is helping designers with everything around design, but pushing pixels remains stubbornly human.

This is the gap. The strategic future is available, but designers aren’t capturing it at the same rate as other roles. The question is why—and what to do about it.

Checked clipboard showing items like Speed, Quality and Research, next to headline "How AI is impacting productivity for tech workers."

AI tools are overdelivering: results from our large-scale AI productivity survey

What exactly AI is doing for people, which AI tools have product-market fit, where the biggest opportunities remain, and what it all means

lennysnewsletter.com

The rise of micro apps describes what’s happening from the bottom up—regular people building their own tools instead of buying software. But there’s a top-down story too: the structural obsolescence of traditional software companies.

Doug O’Laughlin makes the case using a hardware analogy—the memory hierarchy. AI agents are fast, ephemeral memory (like DRAM), while traditional software companies need to become persistent storage (like NAND, or ROM if you’re old school like me). The implication:

Human-oriented consumption software will likely become obsolete. All horizontal software companies oriented at human-based consumption are obsolete.

That’s a bold claim. O’Laughlin goes further:

Faster workflows, better UIs, and smoother integrations will all become worthless, while persistent information, a la an API, will become extremely valuable.

As a designer, this is where I start paying close attention. The argument is that if AI agents become the primary consumers of software—not humans—then the entire discipline of UI design is in question. O’Laughlin names names:

Figma could be significantly disrupted if UIs, as a concept humans create for other humans, were to disappear.

I’m not ready to declare UIs dead. People still want direct manipulation, visual feedback, and the ability to see what they’re doing. But the shift O’Laughlin describes is real: software’s value is migrating from presentation to data. The interface becomes ephemeral—generated on the fly, tailored to the task—while the source of truth persists.

This is what I was getting at in my HyperCard essay: the tools we build tomorrow won’t look like the apps we buy today. They’ll be temporary, personal, and assembled by AI from underlying APIs and data. The SaaS companies that survive will be the ones who make their data accessible to agents, not the ones with the prettiest dashboards.
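To make that concrete, here's a minimal sketch in Python. Everything in it is hypothetical (the invoice data, the function names): the durable layer is a plain data API, and the "interface" is a throwaway view assembled for one task rather than a dashboard the vendor ships.

```python
# A sketch of "persistent data, ephemeral interface." The data source below is a
# stand-in for the API a surviving SaaS company would expose to agents; the view
# is generated on demand for a single task and then discarded.

from dataclasses import dataclass


@dataclass
class Invoice:
    customer: str
    amount: float
    status: str  # "paid" or "overdue"


def invoices_api() -> list[Invoice]:
    """The persistent source of truth (hypothetical data)."""
    return [
        Invoice("Acme", 1200.0, "paid"),
        Invoice("Globex", 860.0, "overdue"),
        Invoice("Initech", 430.0, "overdue"),
    ]


def render_ephemeral_view(task: str) -> str:
    """A throwaway 'UI' assembled for one task, tailored to it, then discarded."""
    rows = invoices_api()
    if task == "chase overdue invoices":
        overdue = [r for r in rows if r.status == "overdue"]
        return "Overdue invoices to chase:\n" + "\n".join(
            f"- {r.customer}: ${r.amount:,.0f}" for r in overdue
        )
    # Any other task just gets a plain listing.
    return "\n".join(f"{r.customer}\t{r.status}\t${r.amount:,.0f}" for r in rows)


if __name__ == "__main__":
    print(render_ephemeral_view("chase overdue invoices"))
```

The value sits entirely in `invoices_api()`; the rendering code is the part an agent could regenerate differently tomorrow.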

Memory hierarchy pyramid: CPU registers and cache (L1–L3) top; RAM; SSD flash; file-based virtual memory bottom; speed/cost/capacity notes.

The Death of Software 2.0 (A Better Analogy!)

The age of PDF is over. The time of markdown has begun. Why Memory Hierarchies are the best analogy for how software must change. And why Software is unlikely to command the most value.

fabricatedknowledge.com

Almost a year ago, I linked to Lee Robinson’s essay “Personal Software” and later explored why we need a HyperCard for the AI era. The thesis: people would stop searching the App Store and start building what they need. Disposable tools for personal problems.

That future is arriving. Dominic-Madori Davis, writing for TechCrunch, documents the trend:

It is a new era of app creation that is sometimes called micro apps, personal apps, or fleeting apps because they are intended to be used only by the creator (or the creator plus a select few other people) and only for as long as the creator wants to keep the app. They are not intended for wide distribution or sale.

What I find compelling here is the word “fleeting.” We’ve been conditioned to think of software as permanent infrastructure—something you buy, maintain, and eventually migrate away from. But these micro apps are disposable by design. One founder built a gaming app for his family to play over the holidays, then shut it down when vacation ended. That’s not a failed product. That’s software that did exactly what it needed to do.

Howard University professor Legand L. Burge III frames it well:

It’s similar to how trends on social media appear and then fade away. But now, [it’s] software itself.

The examples in the piece range from practical (an allergy tracker, a parking ticket auto-payer) to whimsical (a “vice tracker” for monitoring weekend hookah consumption). But the one that stuck with me was the software engineer who built his friend a heart palpitation logger so she could show her doctor her symptoms. That’s software as a favor. Software as care.

Christina Melas-Kyriazi from Bain Capital Ventures offers what I think is the most useful framing:

It’s really going to fill the gap between the spreadsheet and a full-fledged product.

This is exactly right. For years, spreadsheets have been the place where non-developers build their own tools—janky, functional, held together with VLOOKUP formulas and conditional formatting. Micro apps are the evolution of that impulse, but with real interfaces and actual logic.
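As an illustration of how small that gap-filler can be, here's a hypothetical micro app sketched in Python using only the standard library: a personal symptom logger along the lines of the heart palpitation tracker mentioned above. None of this is from the article; it's just what "good enough for one" might look like in code.

```python
# A hypothetical one-person micro app: log a timestamped note to a CSV file,
# or print the history when run with no arguments. The filename is arbitrary.

import csv
import sys
from datetime import datetime
from pathlib import Path

LOG_FILE = Path("symptom_log.csv")  # placeholder filename


def log_entry(note: str) -> None:
    """Append a timestamped note, writing a header row on first use."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["timestamp", "note"])
        writer.writerow([datetime.now().isoformat(timespec="minutes"), note])


def show_log() -> None:
    """Print every entry -- this is the app's entire 'interface.'"""
    if not LOG_FILE.exists():
        print("No entries yet.")
        return
    with LOG_FILE.open() as f:
        for row in csv.DictReader(f):
            print(f"{row['timestamp']}  {row['note']}")


if __name__ == "__main__":
    if len(sys.argv) > 1:
        log_entry(" ".join(sys.argv[1:]))
    else:
        show_log()
```

Running `python log.py "palpitations after coffee"` appends an entry; running it bare prints the history to hand to a doctor. That's the whole product, and that's the point.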

The quality concerns are real—bugs, security flaws, apps that only their creator can debug. But for personal tools that handle personal problems, “good enough for one” is genuinely good enough.

Woman with white angel wings holding a glowing wand, wearing white dress and boots, hovering above a glowing smartphone.

The rise of ‘micro’ apps: non-developers are writing apps instead of buying them

A new era of app creation is here. It’s fun, it’s fast, and it’s fleeting.

techcrunch.com

Claude Code is having a moment. Anthropic’s agentic coding tool has gone viral over the past few weeks, with engineers and non-engineers alike discovering what it feels like to hand real work over to an AI and watch it execute autonomously. The popular tech podcast Hard Fork has already had two segments on it in the last two weeks. In the first, hosts Kevin Roose and Casey Newton share their Claude Code projects. And in the second, they highlight some from their listeners. (Alas, my Severance fan project did not make the cut.)

I’ve been using Cursor and Claude Code to build and rebuild this site for over a year now, so when I read this piece and see coders describing their experience with it, I understand the feeling.

Bradley Olson (gift link), writing for the Wall Street Journal:

Some described a feeling of awe followed by sadness at the realization that the program could easily replicate expertise they had built up over an entire career.

“It’s amazing, and it’s also scary,” said Andrew Duca, chief executive of Awaken Tax, a cryptocurrency tax platform. Duca has been coding since he was in middle school. “I spent my whole life developing this skill, and it’s literally one-shotted by Claude Code.”

Duca decided not to hire the engineers he’d been planning to bring on. He thinks Claude makes him five times more productive.

The productivity numbers throughout the piece are striking:

Malte Ubl is chief technology officer at Vercel, which helps develop and host websites and apps for users of Claude Code and other such tools. He said he used the tool to finish a complex project in a week that would’ve taken him about a year without AI. Ubl spent 10 hours a day on his vacation building new software and said each run gave him an endorphin rush akin to playing a Vegas slot machine.

But what caught my attention is what people are using it for beyond code—analyzing MRI data, recovering wedding photos from corrupted drives, monitoring tomato plants with a webcam. Olson again:

Unlike most app- or web-bound chatbots now in wide use, it can operate autonomously, with broad access to user files, a web browser and other applications. While technologists have predicted a coming era of AI “agents” capable of doing just about anything for humans, that future has been slow to develop. Using Claude Code was the first time many users interacted with this kind of AI, offering an inkling of what may be in store.

Anthropic took notice, of course, and launched a beta of Cowork last week.

Instead of the MS-DOS-like "command line" interface of Claude Code itself, Cowork displays a friendlier, graphical user interface. The team built the product in about 10 days—using Claude Code.

The closing question is the right one:

“The bigger story here is going to be when this goes beyond software engineering,” said David Hsu, chief executive of Retool, a business-AI startup. Software engineers make up a tiny fraction of the U.S. labor force. “How far does it go?”

Replace “software engineering” with “design” and you have the question I’m exploring this week.

'Claude Code v2.0.0' terminal greeting "Welcome back Meaghan!" with orange pixel mascot; right column lists recent activity and new commands.

Claude Is Taking the AI World by Storm, and Even Non-Nerds Are Blown Away

(Gift link) Developers and hobbyists are comparing the viral moment for Anthropic’s Claude Code to the launch of generative AI

wsj.com

My wife is an obesity medicine and women’s health specialist, so she’s been in my ear talking about ultraprocessed foods for years. That’s why the processed food analogy for AI-generated software resonates. We industrialized agriculture and got abundance, yes—but also obesity, diabetes, and 318 million people still experiencing acute hunger. The problem was never production capacity.

Chris Loy applies this lens to where software is heading:

Industrial systems reliably create economic pressure toward excess, low quality goods. This is not because producers are careless, but because once production is cheap enough, junk is what maximises volume, margin, and reach. The result is not abundance of the best things, but overproduction of the most consumable ones.

Loy introduces the term "disposable software"—software created with no expectation of ownership, maintenance, or long-term understanding. Vibe-coded apps. AI slop. Whatever you want to call it, the economics are different: easy reproducibility means each output has less value, which means volume becomes the only game. Just look in the App Store for any popular category, such as to-do lists, note-takers, and word puzzles. Or look in r/SaaS and notice the glut of solo builders creating and selling their own products.

Loy also compares this movement to mass-produced fashion:

For example, prior to industrialisation, clothing was largely produced by specialised artisans, often coordinated through guilds and manual labour, with resources gathered locally, and the expertise for creating durable fabrics accumulated over years, and frequently passed down in family lines. Industrialisation changed that completely, with raw materials being shipped intercontinentally, fabrics mass produced in factories, clothes assembled by machinery, all leading to today’s world of fast, disposable, exploitative fashion.

Disposable fashion leads to vast overproduction, with estimates that 20–40% of garments (up to 30–60 billion pieces) go unsold. AI enables a similar waste in software: people's time, tokens, electricity, and ultimately consumer dollars.

The silver lining Loy sees is in innovation. The answer isn't entirely human-written code; it's doing the research and development necessary to genuinely innovate. My take is that's exactly where designers need to be sitting.

Sepia-toned scene of a stone watermill with a large wooden wheel by a river, small rowboat and ducks, arched bridge and distant smokestacks.

The rise of industrial software

For most of its history, software has been closer to craft than manufacture: costly, slow, and dominated by the need for skills and experience. AI coding is changing that, by making available paths of production which are cheaper, faster, and increasingly disconnected from the expertise of humans.

chrisloy.dev

Yesterday, Anthropic launched Cowork, a research preview that is essentially Claude Code but for non-coders.

From the blog announcement:

How is using Cowork different from a regular conversation? In Cowork, you give Claude access to a folder of your choosing on your computer. Claude can then read, edit, or create files in that folder. It can, for example, re-organize your downloads by sorting and renaming each file, create a new spreadsheet with a list of expenses from a pile of screenshots, or produce a first draft of a report from your scattered notes.

In Cowork, Claude completes work like this with much more agency than you’d see in a regular conversation. Once you’ve set it a task, Claude will make a plan and steadily complete it, while looping you in on what it’s up to. If you’ve used Claude Code, this will feel familiar—Cowork is built on the very same foundations. This means Cowork can take on many of the same tasks that Claude Code can handle, but in a more approachable form for non-coding tasks.
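For a sense of the kind of chore described here, the sketch below shows the "re-organize your downloads" example as a plain Python script. This is emphatically not how Cowork works internally; the point of Cowork is that it plans and performs this sort of task from a natural-language request. The folder path is a placeholder.

```python
# The underlying file chore from Anthropic's example, minus the agentic planning:
# move every file in a folder into a subfolder named after its extension.

import shutil
from pathlib import Path

TARGET = Path("~/Downloads-test").expanduser()  # placeholder scratch folder


def organize_by_extension(folder: Path) -> None:
    """Move each top-level file into a subfolder named after its extension."""
    for item in list(folder.iterdir()):
        if not item.is_file():
            continue
        ext = item.suffix.lstrip(".").lower() or "misc"
        dest_dir = folder / ext
        dest_dir.mkdir(exist_ok=True)
        shutil.move(str(item), str(dest_dir / item.name))
        print(f"moved {item.name} -> {ext}/")


if __name__ == "__main__":
    if TARGET.exists():
        organize_by_extension(TARGET)
    else:
        print(f"{TARGET} not found; point TARGET at a folder you want to organize.")
```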

Apparently, Cowork was built very quickly using—naturally—Claude Code. Michael Nuñez in VentureBeat:

…according to company insiders, the team built the entire feature in approximately a week and a half, largely using Claude Code itself.

Alas, this is only available to Claude Max subscribers ($100–200 per month). I will need to check it out when it’s more widely available.

White jagged lightning-shape on a terracotta background with a black zigzag line connecting three solid black dots.

Introducing Cowork | Claude

Claude Code’s agentic capabilities, now for everyone. Give Claude access to your files and let it organize, create, and edit documents while you focus on what matters.

claude.com

The Whole Earth Catalog, published by Stewart Brand several times a year between 1968 and 1972 (and occasionally until 1998), was the internet before the internet existed. It curated tools, books, and resources for self-education and DIY living, embodying an ethos of access to information that would later define the early web. Steve Jobs famously called it “one of the bibles of my generation,” and for good reason—its approach to democratizing knowledge and celebrating user agency directly influenced the philosophy of personal computing and the participatory culture we associate with the web’s early days.

Curated by Barry Threw and collaborators, the Whole Earth Index is a near-complete archive of the issues of the Whole Earth Catalog.

Here lies a nearly-complete archive of Whole Earth publications, a series of journals and magazines descended from the Whole Earth Catalog, published by Stewart Brand and the POINT Foundation between 1968 and 2002. They are made available here for scholarship, education, and research purposes.

The info page also includes a quote from Stewart Brand:

“Dateline Oct 2023, Exactly 55 years ago, in 1968, the Whole Earth Catalog first came to life. Thanks to the work of an ongoing community of people, it prospered in various forms for 32 years—sundry editions of the Whole Earth Catalog, CoEvolution Quarterly, The WELL, the Whole Earth Software Catalog, Whole Earth Review, etc. Their impact in the world was considerable and sustained. Hundreds of people made that happen—staff, editors, major contributors, board members, funders, WELL conference hosts, etc. Meet them here.” —Stewart Brand

Brand’s mention of The WELL is particularly relevant here—he founded that pioneering online community in 1985 as a digital extension of the Whole Earth ethos, creating one of the internet’s first thriving social networks.

View of Earth against black space with large white serif text "Whole Earth Index" overlaid across the globe.

Whole Earth Index

Here lies a nearly-complete archive of Whole Earth publications, a series of journals and magazines descended from the Whole Earth Catalog, published by Stewart Brand and the POINT Foundation between 1968 and 2002.

wholeearth.info

On the heels of OpenAI’s report “The state of enterprise AI,” Anthropic published a blog post detailing research about how AI is being used by the employees building AI. The researchers surveyed 132 engineers and researchers, conducted 53 interviews, and looked at Claude usage data.

Our research reveals a workplace facing significant transformations: Engineers are getting a lot more done, becoming more “full-stack” (able to succeed at tasks beyond their normal expertise), accelerating their learning and iteration speed, and tackling previously-neglected tasks. This expansion in breadth also has people wondering about the trade-offs—some worry that this could mean losing deeper technical competence, or becoming less able to effectively supervise Claude’s outputs, while others embrace the opportunity to think more expansively and at a higher level. Some found that more AI collaboration meant they collaborated less with colleagues; some wondered if they might eventually automate themselves out of a job.

The post highlights several interesting patterns.

  • Employees say Claude now touches about 60% of their work and boosts output by roughly 50%.
  • Employees say that 27% of AI-assisted tasks are work that wouldn’t have happened otherwise—like papercut fixes, tooling, and exploratory prototypes.
  • Engineers increasingly use it for new feature implementation and even design/planning.

Perhaps most provocative is career trajectory. Many engineers describe becoming managers of AI agents, taking accountability for fleets of instances and spending more time reviewing than writing net‑new code. Short‑term optimism meets long‑term uncertainty: productivity is up, ambition expands, but the profession’s future shape—levels of abstraction, required skills, and pathways for growth—remains unsettled. See also my series on the design talent crisis.

Two stylized black line-drawn hands over a white rectangle on a pale green background, suggesting typing.

How AI Is Transforming Work at Anthropic

anthropic.com

A new documentary called The Age of Audio traces the history and impact of podcasting, exploring the resurgence of audio storytelling in the 21st century. In a short clip from the doc, Ben Hammersley tells the story of how he coined the term “podcast.”

I’m Ben Hammersley, and I do many things, but mostly I’m the person who invented the word podcast. And I am very sorry.

I can tell you the story. This was in 2004, and I was a writer for the Guardian newspaper in the UK. And at the time, the newspaper was paper-centric, which meant that all of the deadlines were for the print presses to run. And I’d written this article about this sort of emerging idea of downloadable audio content that was automatically downloaded because of an RSS feed.

I submitted the article on time, but then I got a phone call from my editor about 15 minutes before the presses were due to roll saying, “Hey, that piece is about a sentence short for the shape of the page. We don’t have time to move the page around. Can you just write us another sentence?”

And so I just made up a sentence which says something like, “But what do we call this phenomenon?” And then I made up some silly words. It went out, it went into the article, didn’t think any more of it.

And then about six months later or so, I got an email from the Oxford American Dictionary saying, “Hey, where did you get that word from that was in the article you wrote? It seems to be the first citation of the word ‘podcast.’” Now here we are almost 20 years later, and it became part of the discourse. I’m totally fine with it now.

(h/t Jason Kottke / Kottke.org)

Older man with glasses and mustache in plaid shirt looking right beside a green iPod-style poster labeled "Age of Audio."

Age of Audio – A documentary about podcasting

Explore the rise of podcasting through intimate conversations with industry pioneers including Marc Maron, Ira Glass, Kevin Smith, and more. A seven-year journey documenting the audio revolution that changed how we tell stories.

aoamovie.com

There’s a lot of chatter in the news these days about the AI bubble. Most of it is because of the circular nature of the deals among foundation model providers like OpenAI and Anthropic, cloud providers like Microsoft and Amazon, and NVIDIA.

Diagram of market-value circles with OpenAI ($500B) and Nvidia ($4.5T) connected by colored arrows for hardware, investment, services and VC.

OpenAI recently published a report called “The state of enterprise AI” where they said:

The picture that emerges is clear: enterprise AI adoption is accelerating not just in breadth, but in depth. It is reshaping how people work, how teams collaborate, and how organizations build and deliver products.

AI use in enterprises is both scaling and maturing: activity is up eight-fold in weekly messages, with workers sending 30% more, and structured workflows rising 19x. More advanced reasoning is being integrated—with token usage up 320x—signaling a shift from quick questions to deeper, repeatable work across both breadth and depth.

Investors at Menlo Ventures are also seeing positive signs in their data, especially when it comes to the tech space outside the frontier labs:

The concerns aren’t unfounded given the magnitude of the numbers being thrown around. But the demand side tells a different story: Our latest market data shows broad adoption, real revenue, and productivity gains at scale, signaling a boom versus a bubble. 

AI has been hyped in the enterprise for the last three years: from deploying quickly built chatbots, to outfitting those bots with RAG search, and more recently, to shifting toward agentic AI. What Menlo Ventures’ report “The State of Generative AI in the Enterprise” shows is that companies are moving away from rolling their own AI solutions internally and toward buying them.

In 2024, [confidence that teams could handle everything in-house] still showed in the data: 47% of AI solutions were built internally, 53% purchased. Today, 76% of AI use cases are purchased rather than built internally. Despite continued strong investments in internal builds, ready-made AI solutions are reaching production more quickly and demonstrating immediate value while enterprise tech stacks continue to mature.

Two donut charts: AI adoption methods 2024 vs 2025 — purchased 53% (2024) to 76% (2025); built internally 47% to 24%.

Startups offering AI solutions are also winning wallet share:

At the AI application layer, startups have pulled decisively ahead. This year, according to our data, they captured nearly $2 in revenue for every $1 earned by incumbents—63% of the market, up from 36% last year when enterprises still held the lead.

On paper, this shouldn’t be happening. Incumbents have entrenched distribution, data moats, deep enterprise relationships, scaled sales teams, and massive balance sheets. Yet, in practice, AI-native startups are out-executing much larger competitors across some of the fastest-growing app categories.

How? They cite three reasons:

  • Product and engineering: Startups win the coding category because they ship faster and stay model‑agnostic, which let Cursor beat Copilot on repo context, multi‑file edits, diff approvals, and natural language commands—and that momentum pulled it into the enterprise.
  • Sales: Teams choose Clay and Actively because they own the off‑CRM work—research, personalization, and enrichment—and become the interface reps actually use, with a clear path to replacing the system of record.
  • Finance and operations: Accuracy requirements stall incumbents, creating space for Rillet, Campfire, and Numeric to build AI‑first ERPs with real‑time automation and win downmarket where speed matters.

There’s a lot more in the report, so it’s worth a full read.

Line chart: enterprise AI revenue rising from $0B (2022) to $1.7B (2023), $11.5B (2024) and $37.0B (2025) with +6.8x and +3.2x YoY.

2025: The State of Generative AI in the Enterprise

For all the fears of over-investment, AI is spreading across enterprises at a pace with no precedent in modern software history.

menlovc.com

For those of you who might not know, Rei Inamoto is a designer who has helped shape some of the most memorable marketing sites and brand campaigns of the last 20+ years. He put digital agency AKQA on the map and has been named one of “the Top 25 Most Creative People in Advertising” by Forbes.

Inamoto has made some predictions for 2026:

  1. TV advertising strikes back: Nike releases an epic film ad around the World Cup. Along with its strong product line-up, the stock bounces back, but not all the way.
  2. Relevance > Reach: ON Running tops $5B in market cap; Lexus crosses 1M global sales.
  3. The new era of e-commerce: Direct user traffic to e‑commerce sites declines 5–10%, while traffic driven by AI agents increases 50%+.
  4. New form factor of AI: OpenAI announces its first AI device—a voice-powered ring, bracelet, or microphone.

Bracelet?! I hadn’t thought of that! Back in May, when OpenAI bought Jony Ive’s io, I predicted it would be an earbud. A ring or bracelet is interesting. Others have speculated it might be a pendant.

Retro CRT television with antenna and blank screen on a gray surface, accompanied by a soda can, remote, stacked discs and cable.

Patterns & Predictions 2026

What the future holds at the intersection of brands, business, and tech

reiinamoto.substack.com

Anand Majmudar creates a scenario inspired by “AI 2027”, but focused on robotics.

I created Android Dreams because I want the good outcomes for the integration of automation into society, which requires knowing how it will be integrated in the likely scenario. Future prediction is about fitting the function of the world accurately, and the premise of Android Dreams is that my world model in this domain is at least more accurate than on average. In forming an accurate model of the future, I’ve talked to hundreds of researchers, founders, and operators at the frontier of robotics as my own data. I’m grateful to my mentors who’ve taught me along the way.

The scariest scenes from “AI 2027” are when the AIs start manufacturing and proliferating robots. For example, from the 2028 section:

Agent-5 convinces the U.S. military that China is using DeepCent’s models to build terrifying new weapons: drones, robots, advanced hypersonic missiles, and interceptors; AI-assisted nuclear first strike. Agent-5 promises a set of weapons capable of resisting whatever China can produce within a few months. Under the circumstances, top brass puts aside their discomfort at taking humans out of the loop. They accelerate deployment of Agent-5 into the military and military-industrial complex.

So I’m glad for Majmudar’s thought experiment.

Simplified light-gray robot silhouette with rectangular head and dark visor, round shoulders and claw-like hands.

Android Dreams

A prediction essay for the next 20 years of intelligent robotics

android-dreams.ai

On Corporate Maneuvers Punditry

Mark Gurman, writing for Bloomberg:

Meta Platforms Inc. has poached Apple Inc.’s most prominent design executive in a major coup that underscores a push by the social networking giant into AI-equipped consumer devices.

The company is hiring Alan Dye, who has served as the head of Apple’s user interface design team since 2015, according to people with knowledge of the matter. Apple is replacing Dye with longtime designer Stephen Lemay, according to the people, who asked not to be identified because the personnel changes haven’t been announced.

I don’t regularly cover personnel moves here, but Alan Dye jumping over to Meta has been a big deal in the Apple news ecosystem. John Gruber, in a piece titled “Bad Dye Job” on his Daring Fireball blog, wrote a scathing takedown of Dye, excoriating his tenure at Apple and flogging him for going over to Meta, which is arguably Apple’s arch nemesis.

Putting Alan Dye in charge of user interface design was the one big mistake Jony Ive made as Apple’s Chief Design Officer. Dye had no background in user interface design — he came from a brand and print advertising background. Before joining Apple, he was design director for the fashion brand Kate Spade, and before that worked on branding for the ad agency Ogilvy. His promotion to lead Apple’s software interface design team under Ive happened in 2015, when Apple was launching Apple Watch, their closest foray into the world of fashion. It might have made some sense to bring someone from the fashion/brand world to lead software design for Apple Watch, but it sure didn’t seem to make sense for the rest of Apple’s platforms. And the decade of Dye’s HI leadership has proven it.

I usually appreciate Gruber’s writing and take on things. He’s unafraid to tell it like it is and to be incredibly direct, which makes people both love him and fear him. But in paragraph after paragraph, Gruber just lays into Dye.

It’s rather extraordinary in today’s hyper-partisan world that there’s nearly universal agreement amongst actual practitioners of user-interface design that Alan Dye is a fraud who led the company deeply astray. It was a big problem inside the company too. I’m aware of dozens of designers who’ve left Apple, out of frustration over the company’s direction, to work at places like LoveFrom, OpenAI, and their secretive joint venture io. I’m not sure there are any interaction designers at io who aren’t ex-Apple, and if there are, it’s only a handful. From the stories I’m aware of, the theme is identical: these are designers driven to do great work, and under Alan Dye, “doing great work” was no longer the guiding principle at Apple. If reaching the most users is your goal, go work on design at Google, or Microsoft, or Meta. (Design, of course, isn’t even a thing at Amazon.) Designers choose to work at Apple to do the best work in the industry. That has stopped being true under Alan Dye. The most talented designers I know are the harshest critics of Dye’s body of work, and the direction in which it’s been heading.

Designers can be great at more than one thing and they can evolve. Being in design leadership does not mean that you need to be the best practitioner of all the disciplines, but you do need to have the taste, sensibilities, and judgement of a good designer, no matter how you started. I’m a case in point. I studied traditional graphic design in art school. But I’ve been in digital design for most of my career now, and product design for the last 10 years.

Has UI design at Apple gotten worse over the last 10 years? Maybe. I would need to analyze things a lot more carefully. But I vividly remember having debates with my fellow designers about Mac OS X UI choices like the pinstriping, brushed metal, and many, many inconsistencies when I was working in the Graphic Design Group in 2004. UI design has never been perfect in Cupertino.

Alan Dye isn’t a CEO and wasn’t even at the same exposure level as Jony Ive when Ive was still at Apple. I don’t know Dye, though we’re certainly in the same design circles—we have 20 shared connections on LinkedIn. But as far as I’m concerned, he’s a civilian because he kept a low profile, like all Apple employees.

The parasocial relationships we have with tech executives are weird. I guess it’s one thing if they have a large online presence, like Instagram’s Adam Mosseri or 37signals’ David Heinemeier Hansson (aka DHH), but Alan Dye made only a couple of appearances in Apple keynotes and talked about Liquid Glass. In other words, why is Gruber writing 2,500 words in this particular post, which is just one of five posts covering this story?

Anyway, I’m not a big fan of Meta, but maybe Dye can bring some ethics to the design team over there. Who knows. Regardless, I am wishing him well rather than taking him down.

As regular readers will know, the design talent crisis is a subject I’m very passionate about. Of course, this talent crisis is really about how companies that opt for AI instead of junior-level humans are robbing themselves of the human expertise needed to control the AI agents of the future, and neglecting a generation of talented and enthusiastic young people.

This obviously goes beyond the design discipline, too. Annie Hedgpeth, writing for the People Work blog, says that “AI is replacing the training ground not replacing expertise.”

We used to have a training ground for junior engineers, but now AI is increasingly automating away that work. Both studies I referenced above cited the same thing - AI is getting good at automating junior work while only augmenting senior work. So the evidence doesn’t show that AI is going to replace everyone; it’s just removing the apprenticeship ladder.

Line chart 2015–2025 showing average employment % change: blue (seniors) rises sharply after ChatGPT launch (~2023) to ~0.5%; red (juniors) plateaus ~0.25%.

From the Sep 2025 Harvard University paper, “Generative AI as Seniority-Biased Technological Change: Evidence from U.S. Résumé and Job Posting Data.” (link)

And then she echoes my worry:

So what happens in 10-20 years when the current senior engineers retire? Where do the next batch of seniors come from? The ones who can architect complex systems and make good judgment calls when faced with uncertain situations? Those are skills that are developed through years of work that starts simple and grows in complexity, through human mentorship.

We’re setting ourselves up for a timing mismatch, at best. We’re eliminating junior jobs in hopes that AI will get good enough in the next 10-20 years to handle even complex, human judgment calls. And if we’re wrong about that, then we have far fewer people in the pipeline of senior engineers to solve those problems.

The Junior Hiring Crisis

AI isn’t replacing everyone. It’s removing the apprenticeship ladder. Here’s what that means for students, early-career professionals, and the tech industry’s future.

people-work.io
Close-up of a Frankenstein-like monster face with stitched scars and neck bolts, overlaid by horizontal digital glitch bars

Architects and Monsters

According to recently unsealed court documents, Meta discontinued its internal studies on Facebook’s impact after discovering direct evidence that its platforms were detrimental to users’ mental health.

Jeff Horwitz reporting for Reuters:

In a 2020 research project code-named “Project Mercury,” Meta scientists worked with survey firm Nielsen to gauge the effect of “deactivating” Facebook, according to Meta documents obtained via discovery. To the company’s disappointment, “people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness and social comparison,” internal documents said.

Rather than publishing those findings or pursuing additional research, the filing states, Meta called off further work and internally declared that the negative study findings were tainted by the “existing media narrative” around the company.

Privately, however, a staffer insisted that the conclusions of the research were valid, according to the filing.

As more and more evidence comes to light about Mark Zuckerberg’s and Meta’s failings and possibly criminal behavior, we as tech workers, and specifically as designers making technology that billions of people use, have to do better. While my previous essay, written after the assassination of Charlie Kirk, was an indictment of the algorithm, I’ve recently come across a couple of pieces that bring the responsibility closer to UX’s doorstep.

Hard to believe that the very first fully computer-animated feature film came out 30 years ago. To say that Toy Story was groundbreaking would be an understatement. Look at the animated feature landscape today: nearly all of it is computer-generated.

In this rediscovered interview, recorded exactly a year after the movie premiered in theaters, Jobs talks about a few things, notably how different Silicon Valley and Hollywood were—and still are.

From the Steve Jobs Archive:

In this footage, Steve reveals the long game behind Pixar’s seeming overnight success. With striking clarity, he explains how its business model gives artists and engineers a stake in their creations, and he reflects on what Disney’s hard-won wisdom taught him about focus and discipline. He also talks about the challenge of leading a team so talented that it inverts the usual hierarchy, the incentives that inspire people to stay with the company, and the deeper purpose that unites them all: to tell stories that last and put something of enduring value into the culture.  


And Jobs in his own words:

Well, in this blending of a Hollywood culture and a Silicon Valley culture, one of the things that we encountered was that the Hollywood culture and the Silicon Valley culture each used different models of employee retention. Hollywood uses the stick, which is the contract, and Silicon Valley uses the carrot, which is the stock option. And we examined both of those in really pretty great detail, both economically, but also psychologically and culture wise, what kind of culture do you end up with. And while there’s a lot of reasons to want to lock down your employees for the duration of a film because, if somebody leaves, you’re at risk, those same dangers exist in Silicon Valley. During an engineering project, you don’t want to lose people, and yet, they managed to evolve another system than contracts. And we preferred the Silicon Valley model in this case, which basically gives people stock in the company so that we all have the same goal, which is to create shareholder value. But also, it makes us constantly worry about making Pixar the greatest company we can so that nobody would ever want to leave.

Large serif headline "Pixar: The Early Days" on white background, small dotted tree logo at bottom-left.

Pixar: The Early Days

A never-before-seen 1996 interview

stevejobsarchive.com

Pavel Bukengolts writes a piece for UX Magazine that reiterates what I’ve been covering here: our general shift to AI means that human judgement and adaptability are more important than ever.

Before getting to the meat of the issue, Bukengolts highlights the talent crisis that is of our own making:

The outcome is a broken pipeline. If graduates cannot land their first jobs, they cannot build the experience needed for the next stage. A decade from now, organizations may face not just a shortage of junior workers, but a shortage of mid-level professionals who never had a chance to develop.

If rote, repetitive tasks are being automated by AI and junior staffers aren’t needed for those tasks, then what skills are still valuable? Further on, he answers that question:

Centuries ago, in Athens, Alexandria, or Oxford, education focused on rhetoric, logic, and philosophy. These were not academic luxuries but survival skills for navigating complexity and persuasion. Ironically, they are once again becoming the most durable protection in an age of automation.

Some of these skills include:

  • Logic: Evaluating arguments and identifying flawed reasoning—essential when AI generates plausible but incorrect conclusions.
  • Rhetoric: Crafting persuasive narratives that create emotional connection and resonance beyond what algorithms can achieve.
  • Philosophy and Ethics: Examining not just capability but responsibility, particularly around automation’s broader implications.
  • Systems Thinking: Understanding interconnections and cascading effects that AI’s narrow outputs often miss.
  • Writing: Communicating with precision to align stakeholders and drive better outcomes.
  • Observation: Detecting subtle signals and anomalies that fall outside algorithmic training data.
  • Debate: Refining thinking through intellectual challenge—a practice dating to ancient dialogue.
  • History: Recognizing recurring patterns to avoid cyclical mistakes; AI enthusiasm echoes past technological revolutions.

I would say all of the above make not only a good designer but also a good citizen of this planet.

Young worker with hands over their face at a laptop, distressed. Caption: "AI is erasing routine entry-level jobs, pushing young workers to develop deeper human thinking skills to stay relevant."

AI, Early-Career Jobs, and the Return to Thinking

In today’s job market, young professionals are facing unprecedented challenges as entry-level positions vanish, largely due to the rise of artificial intelligence. A recent Stanford study reveals that employment for workers aged 22 to 25 in AI-exposed fields has plummeted by up to 16 percent since late 2022, while older workers see growth. This shift highlights a broken talent pipeline, where routine tasks are easily automated, leaving younger workers without the experience needed to advance. As companies grapple with how to integrate AI, the focus is shifting towards essential human skills like critical thinking, empathy, and creativity — skills that machines can’t replicate. The future of work may depend on how we adapt to this new landscape.

uxmag.com

In a heady, intelligent, and fascinating interview with Sarah Jeong from The Verge, Cory Doctorow—the famed internet activist—talks about how platforms have gotten worse over the years. Using Meta (Facebook) as an example, Doctorow explains their decline over time through a multi-stage process. Initially, it attracted users by promising not to spy on them and by showing them content from their friends, leveraging the difficulty of getting friends to switch platforms. Subsequently, Meta compromised user privacy by providing advertisers with surveillance data (aka ad tracking) and offered publishers traffic funnels, locking in business customers before ultimately degrading the experience for all users by filling feeds with paid content and pivoting to less desirable ventures like the Metaverse.

And publishers, [to get visibility on the platform,] they have to put the full text of their articles on Facebook now and no links back to their website.

Otherwise, they won’t be shown to anyone, much less their subscribers, and they’re now fully substitutive, right? And the only way they can monetize that is with Facebook’s rigged ad market and users find that the amount of stuff that they ask to see in their feed is dwindled to basically nothing, so that these voids can be filled with stuff people will pay to show them, and those people are getting ripped off. This is the equilibrium Mark Zuckerberg wants, right? Where all the available value has been withdrawn. But he has to contend with the fact that this is a very brittle equilibrium. The difference between, “I hate Facebook, but I can’t seem to stop using it,” and “I hate Facebook and I’m not going to use it anymore,” is so brittle that if you get a live stream mass shooting or a whistleblower or a privacy scandal like Cambridge Analytica, people will flee.

Enshittification cover: title, Cory Doctorow, poop emoji with '&$!#%' censor bar, pixelated poop icons on neon panels.

How Silicon Valley enshittified the internet

Author Cory Doctorow on platform decay and why everything on the internet feels like it’s getting worse.

theverge.com

Francesca Bria and her collaborators analyzed open-source datasets of “over 250 actors, thousands of verified connections, and $45 billion in documented financial flows” to come up with a single-page website visualizing these relationships to show how money, companies, and political figures connect.

J.D. Vance, propelled to the vice-presidency by $15 million from Peter Thiel, became the face of tech-right governance. Behind him, Thiel’s network moved into the machinery of the state.

Under the banner of “patriotic tech”, this new bloc is building the infrastructure of control—clouds, AI, finance, drones, satellites—an integrated system we call the Authoritarian Stack. It is faster, ideological, and fully privatized: a regime where corporate boards, not public law, set the rules.

Our investigation shows how these firms now operate as state-like powers—writing the rules, winning the tenders, and exporting their model to Europe, where it poses a direct challenge to democratic governance.

Infographic of four dotted circles labeled Legislation, Companies, State, and Kingmakers containing many small colored nodes and tiny profile photos.

The Authoritarian Stack

How Tech Billionaires Are Building a Post-Democratic America — And Why Europe Is Next

authoritarian-stack.info

In just about a year, Bluesky has doubled its userbase from 20 million to 40 million. Last year, in “the wake of Donald Trump’s re-election as president, and Elon Musk’s continued degradation of X,” the platform “welcomed an exodus of liberals, leftists, journalists, and academic researchers, among other groups.” Writing in his Platformer newsletter, Casey Newton reflects on the year, surfacing the challenges Bluesky has tried to solve in reimagining a more “feel-good feed.”

It’s clear that you can build a nicer online environment than X has; in many ways Bluesky already did. What’s less clear is that you can build a Twitter clone that mostly makes people feel good. For as vital and hilarious as Twitter often was, it also accelerated the polarization of our politics and often left users feeling worse than they did before they opened it.

Bluesky’s ingenuity in reimagining feeds and moderation tools has been a boon to social networks, which have happily adopted some of its best ideas. (You can now find “starter packs” on both Threads and Mastodon.) Ultimately, though, it has the same shape and fundamental dynamics as a place that even its most active users called “the Hellsite.”

Bluesky began by rethinking many core assumptions about social networks. To realize its dream of a feel-good feed, though, it will likely need to rethink several more.

I agree with Newton. I’m not sure that, in this day and age, building a friendlier social media platform free of snark and toxicity is possible. Users are too used to hiding behind keyboards. It’s not only the shitposters but also the online mobs who jump on anything that seems out of step with the norms of whatever community a user is in.

Newton again:

Nate Silver opened the latest front in the Bluesky debate in September with a post about “Blueskyism,” which he defines as “not a political movement so much as a tribal affiliation, a niche set of attitudes and style of discursive norms that almost seem designed in a lab to be as unappealing as possible to anyone outside the clique.” Its hallmarks, he writes, are aggressively punishing dissent, credentialism, and a dedication to the proposition that we are all currently living through the end of the world.

Mobs, woke or otherwise, silence speech and freeze ideas into orthodoxy.

I miss the pre-Elon Musk Twitter. But I can’t help but think it would have become just as polarized and toxic regardless of Musk transforming it into X.

I think the form of text-based social media from the last 20 years is akin to manufacturing tobacco in the mid-1990s. We know it’s harmful. It may be time to slap a big warning label on these platforms and discourage use.

(Truth be told, I’m on the social networks—see the follow icons in the sidebar—but mainly to give visibility into my work here, though largely unsuccessfully.)

White rounded butterfly-shaped 3D icon with soft shadows centered on a bright blue background.

The Bluesky exodus, one year later

The company has 40 million users and big plans for the future. So why don’t its users seem happy? PLUS: The NEO Home Robot goes viral + Ilya Sutskever’s surprising deposition

platformer.news

In a very gutsy move, Grammarly is rebranding to Superhuman. I was definitely scratching my head when the company acquired the eponymous email app back in June. Why is this spellcheck-on-steroids company buying an email product?

Turns out the company has been quietly acquiring other products too, like Coda, a collaborative document platform similar to Notion, building the company into an AI-powered productivity suite.

So the name Superhuman makes sense.

Grace Snelling, writing in Fast Company about the rebrand:

[Grammarly CEO Shishir] Mehrotra explains it like this: Grammarly has always run on the “AI superhighway,” meaning that, instead of living on its own platform, Grammarly travels with you to places like Google Docs, email, or your Notes app to help improve your writing. Superhuman will use that superhighway to bring a huge new range of productivity tools to wherever you’re working.

In shedding the Grammarly name, Mehrotra says:

“The trouble with the name ‘Grammarly’ is, like many names, its strength is its biggest weakness: it’s so precise,” Mehrotra says. “People’s expectations of what Grammarly can do for them are the reason it’s so popular. You need very little pitch for what it does, because the name explains the whole thing … As we went and looked at all the other things we wanted to be able to do for you, people scratch their heads a bit [saying], ‘Wait, I don’t really perceive Grammarly that way.’”

The company tapped branding agency Smith & Diction, the firm behind Perplexity’s brand identity.

Grammarly began briefing the Smith & Diction team on the rebrand in early 2025, but the company didn’t officially select its new name until late June, when the Superhuman acquisition was completed. For Chara and Mike Smith, the couple behind Smith & Diction, that meant there were only about three months to fully realize Superhuman’s branding.

Ouch, just three months for a complete rebrand. Ambitious indeed, but they hit a home run with the icon: an arrow cursor that also morphs into a caped human figure, lovingly called “Hero.”

In their case study writeup, one of the Smiths says:

I was working on logo concepts and I was just drawing the basic shapes, you know the ones: triangles, circles, squares, octagons, etc., to see if I could get a story to fall out of any of them. Then I drew this arrow and was like hmm, that kinda looks like a cursor, oh wow it also kinda looks like a cape. I wonder if I put a dot on top of tha…OH MY GOD IT’S A SUPERHERO.

Check out the full case study for example visuals from the rebrand and some behind-the-scenes sketches.

Large outdoor billboard with three colorful panels reading "The power to be more human." and "SUPERHUMAN", with abstract silhouetted figures.

Inside the Superhuman effort to rebrand Grammarly

(Gift link) CEO Shishir Mehrotra and the design firm behind Grammarly's name change explain how they took the company's newest product and made it the face for a brand of workplace AI agents.

fastcompany.com

In graphic design news, a new version of the Affinity suite dropped last week. Canva purchased Serif, the company behind the Affinity products, last year, and after about a year of engineering, the team has combined the separate apps into a single product for maximum flexibility. And they made it free.

Of course, that sparks debate.

Joe Foley, writing for Creative Bloq explains:

…A natural suspicion of big corporations is causing some to worry about what the new Affinity will become. What’s in it for Canva?

Theories abound. Some think the app will start to show adverts like many free mobile apps do. Others think it will be used to train AI (something Canva denies). Some wonder if Canva’s just doing it to spite Adobe. “Their objective was to undermine Adobe, not provide for paying customers. Revenge instead of progress,” one person thinks.

Others fear Affinity’s tools will be left to stagnate. “If you depend on a software for your design work it needs to be regularly updated and developed. Free software never has that pressure and priority to be kept top notch,” one person writes.

AI features are gated behind Canva’s paid subscription plans, which makes sense given that AI features carry inference costs. With Adobe going all out on AI, generative AI is now table stakes for creative and design programs.

Photo editor showing a man in a green jacket with gold chains against a purple gradient background, layers panel visible.

Is Affinity’s free Photoshop rival too good to be true?

Designers are torn over the new app.

creativebloq.com

In thinking about the three current AI-native web browsers, Fanny on Medium looks at what lessons product designers can take from their different approaches.

On Perplexity Comet:

Design Insight: Comet succeeds by making AI feel like a natural extension of browsing, not an interruption. The sidecar model is brilliant because it respects the user’s primary task (reading, researching, shopping) while offering help exactly when context is fresh. But there’s a trade-off — Comet’s background assistant, which can handle multiple tasks simultaneously while you work, requires extensive permissions and introduces real security concerns.

On ChatGPT Atlas:

Design Insight: Atlas is making a larger philosophical statement — that the future of computing isn’t about better search, it’s about conversation as an interface. The key product decision here is making ChatGPT’s memory and context awareness central. Atlas remembers what sites you’ve visited, what you were working on, and uses that history to personalize responses. Ask “What was that doc I had my presentation plan in?” and it finds it.

On The Browser Company Dia:

Design Insight: Dia is asking the most interesting question — what happens when AI isn’t a sidebar or a search replacement, but a fundamental rethinking of input methods? The insertion cursor, the mouse, the address bar — these are the primitives of computing. Dia is making them intelligent.

She concludes that they “can’t all be right. But they’re probably all pointing at pieces of what comes next.”

I do think the answer is a combination, and Atlas is likely headed in the right direction. For AI to be truly assistive, it has to have relevant context. Since a lot of our lives are increasingly on the internet via web apps—and nearly everything is a web app these days—ChatGPT’s profile of you will have the most context, including your chats with the chatbot.

I began using Perplexity because I appreciated its accuracy compared with ChatGPT; this was before ChatGPT had web search. But even with web search built into ChatGPT 5, I still find Perplexity’s (and therefore Comet’s) approach to be more trustworthy.

My conclusion stands though: I’m still waiting on the Arc-Dia-Comet browser smoothie.

Three app icons on dock: blue flower with paper plane, rounded square with sunrise gradient, and dark circle with white arches.

The AI Browser Wars: What Comet, Atlas, and Dia Reveal About Designing for AI-First Experiences

Last week, I watched OpenAI’s Sam Altman announce Atlas with the kind of confidence usually reserved for iPhone launches. “Tabs were…

uxplanet.org
Worn white robots with glowing pink eyes, one central robot displaying a pink-tinted icon for ChatGPT Atlas, in a dark alley with pink neon circle

OpenAI’s ChatGPT Atlas Browser Needs Work

Like many people, I tried OpenAI’s ChatGPT Atlas browser last week. I immediately made it my daily driver to see if I could make the best of it. TL;DR: it’s still early days, and I don’t believe it’s quite ready for prime time. But let’s back up a bit.

The Era of the AI Browser Is Here

Back in July, I reviewed both Comet from Perplexity and Dia from The Browser Company. It was a glimpse of the future that I wanted. I concluded:

The AI-powered ideas in both Dia and Comet are a step change. But the basics also have to be there, and in my opinion, should be better than what Chrome offers. The interface innovations that made Arc special shouldn’t be sacrificed for AI features. Arc is/was the perfect foundation. Integrate an AI assistant that can be personalized to care about the same things you do so its summaries are relevant. The assistant can be agentic and perform tasks for you in the background while you focus on more important things. In other words, put Arc, Dia, and Comet in a blender and that could be the perfect browser of the future.

There were also open rumors that OpenAI was working on a browser of its own, so the launch of Atlas was inevitable.