
89 posts tagged with “tech industry”

The Whole Earth Catalog, published by Stewart Brand several times a year between 1968 and 1972 (and occasionally until 1998), was the internet before the internet existed. It curated tools, books, and resources for self-education and DIY living, embodying an ethos of access to information that would later define the early web. Steve Jobs famously called it “one of the bibles of my generation,” and for good reason—its approach to democratizing knowledge and celebrating user agency directly influenced the philosophy of personal computing and the participatory culture we associate with the web’s early days.

Curated by Barry Threw and collaborators, the Whole Earth Index is a near-complete archive of the issues of the Whole Earth Catalog.

Here lies a nearly-complete archive of Whole Earth publications, a series of journals and magazines descended from the Whole Earth Catalog, published by Stewart Brand and the POINT Foundation between 1968 and 2002. They are made available here for scholarship, education, and research purposes.

The info page also includes a quote from Stewart Brand:

“Dateline Oct 2023, Exactly 55 years ago, in 1968, the Whole Earth Catalog first came to life. Thanks to the work of an ongoing community of people, it prospered in various forms for 32 years—sundry editions of the Whole Earth Catalog, CoEvolution Quarterly, The WELL, the Whole Earth Software Catalog, Whole Earth Review, etc. Their impact in the world was considerable and sustained. Hundreds of people made that happen—staff, editors, major contributors, board members, funders, WELL conference hosts, etc. Meet them here.” —Stewart Brand

Brand’s mention of The WELL is particularly relevant here—he founded that pioneering online community in 1985 as a digital extension of the Whole Earth ethos, creating one of the internet’s first thriving social networks.

View of Earth against black space with large white serif text "Whole Earth Index" overlaid across the globe.

Whole Earth Index

Here lies a nearly-complete archive of Whole Earth publications, a series of journals and magazines descended from the Whole Earth Catalog, published by Stewart Brand and the POINT Foundation between 1968 and 2002.

wholeearth.info

On the heels of OpenAI’s report “The state of enterprise AI,” Anthropic published a blog post detailing research about how AI is being used by the employees building AI. The researchers surveyed 132 engineers and researchers, conducted 53 interviews, and looked at Claude usage data.

Our research reveals a workplace facing significant transformations: Engineers are getting a lot more done, becoming more “full-stack” (able to succeed at tasks beyond their normal expertise), accelerating their learning and iteration speed, and tackling previously-neglected tasks. This expansion in breadth also has people wondering about the trade-offs—some worry that this could mean losing deeper technical competence, or becoming less able to effectively supervise Claude’s outputs, while others embrace the opportunity to think more expansively and at a higher level. Some found that more AI collaboration meant they collaborated less with colleagues; some wondered if they might eventually automate themselves out of a job.

The post highlights several interesting patterns.

  • Employees say Claude now touches about 60% of their work and boosts output by roughly 50%.
  • Employees say that 27% of AI‑assisted tasks are work that wouldn’t have happened otherwise—like papercut fixes, tooling, and exploratory prototypes.
  • Engineers increasingly use it for new feature implementation and even design/planning.

Perhaps most provocative is career trajectory. Many engineers describe becoming managers of AI agents, taking accountability for fleets of instances and spending more time reviewing than writing net‑new code. Short‑term optimism meets long‑term uncertainty: productivity is up, ambition expands, but the profession’s future shape—levels of abstraction, required skills, and pathways for growth—remains unsettled. See also my series on the design talent crisis.

Two stylized black line-drawn hands over a white rectangle on a pale green background, suggesting typing.

How AI Is Transforming Work at Anthropic

anthropic.com

A new documentary called The Age of Audio traces the history and impact of podcasting, exploring the resurgence of audio storytelling in the 21st century. In a clip from the doc in the form of a short, Ben Hammersley tells the story of how he coined the term “podcast.”

I’m Ben Hammersley, and I do many things, but mostly I’m the person who invented the word podcast. And I am very sorry.

I can tell you the story. This was in 2004, and I was a writer for the Guardian newspaper in the UK. And at the time, the newspaper was paper-centric, which meant that all of the deadlines were for the print presses to run. And I’d written this article about this sort of emerging idea of downloadable audio content that was automatically downloaded because of an RSS feed.

I submitted the article on time, but then I got a phone call from my editor about 15 minutes before the presses were due to roll saying, “Hey, that piece is about a sentence short for the shape of the page. We don’t have time to move the page around. Can you just write us another sentence?”

And so I just made up a sentence which says something like, “But what do we call this phenomenon?” And then I made up some silly words. It went out, it went into the article, didn’t think any more of it.

And then about six months later or so, I got an email from the Oxford American Dictionary saying, “Hey, where did you get that word from that was in the article you wrote? It seems to be the first citation of the word ‘podcast.’” Now here we are almost 20 years later, and it became part of the discourse. I’m totally fine with it now.

(h/t Jason Kottke / Kottke.org)

Older man with glasses and mustache in plaid shirt looking right beside a green iPod-style poster labeled "Age of Audio."

Age of Audio – A documentary about podcasting

Explore the rise of podcasting through intimate conversations with industry pioneers including Marc Maron, Ira Glass, Kevin Smith, and more. A seven-year journey documenting the audio revolution that changed how we tell stories.

aoamovie.com

There’s a lot of chatter in the news these days about the AI bubble. Most of it is because of the circular nature of the deals among the foundational model providers like OpenAI and Anthropic, and cloud providers (Microsoft, Amazon) and NVIDIA.

Diagram of market-value circles with OpenAI ($500B) and Nvidia ($4.5T) connected by colored arrows for hardware, investment, services and VC.

OpenAI recently published a report called “The state of enterprise AI” where they said:

The picture that emerges is clear: enterprise AI adoption is accelerating not just in breadth, but in depth. It is reshaping how people work, how teams collaborate, and how organizations build and deliver products.

AI use in enterprises is both scaling and maturing: activity is up eight-fold in weekly messages, with workers sending 30% more, and structured workflows rising 19x. More advanced reasoning is being integrated—with token usage up 320x—signaling a shift from quick questions to deeper, repeatable work across both breadth and depth.

Investors at Menlo Ventures are also seeing positive signs in their data, especially when it comes to the tech space outside the frontier labs:

The concerns aren’t unfounded given the magnitude of the numbers being thrown around. But the demand side tells a different story: Our latest market data shows broad adoption, real revenue, and productivity gains at scale, signaling a boom versus a bubble. 

AI has been hyped in the enterprise for the last three years: from deploying quickly-built chatbots, to outfitting those bots with RAG search, and more recently, to trying to shift toward agentic AI. What Menlo Ventures’ report “The State of Generative AI in the Enterprise” says is that companies are moving away from building their own AI solutions internally and toward buying them.

In 2024, [confidence that teams could handle everything in-house] still showed in the data: 47% of AI solutions were built internally, 53% purchased. Today, 76% of AI use cases are purchased rather than built internally. Despite continued strong investments in internal builds, ready-made AI solutions are reaching production more quickly and demonstrating immediate value while enterprise tech stacks continue to mature.

Two donut charts: AI adoption methods 2024 vs 2025 — purchased 53% (2024) to 76% (2025); built internally 47% to 24%.

Also, startups offering AI solutions are winning wallet share:

At the AI application layer, startups have pulled decisively ahead. This year, according to our data, they captured nearly $2 in revenue for every $1 earned by incumbents—63% of the market, up from 36% last year when enterprises still held the lead.

On paper, this shouldn’t be happening. Incumbents have entrenched distribution, data moats, deep enterprise relationships, scaled sales teams, and massive balance sheets. Yet, in practice, AI-native startups are out-executing much larger competitors across some of the fastest-growing app categories.

How? They cite three reasons:

  • Product and engineering: Startups win the coding category because they ship faster and stay model‑agnostic, letting Cursor beat Copilot on repo context, multi‑file edits, diff approvals, and natural language commands—and that momentum pulled it into the enterprise.
  • Sales: Teams choose Clay and Actively because they own the off‑CRM work—research, personalization, and enrichment—and become the interface reps actually use, with a clear path to replacing the system of record.
  • Finance and operations: Accuracy requirements stall incumbents, creating space for Rillet, Campfire, and Numeric to build AI‑first ERPs with real‑time automation and win downmarket where speed matters.

There’s a lot more in the report, so it’s worth a full read.

Line chart: enterprise AI revenue rising from $0B (2022) to $1.7B (2023), $11.5B (2024) and $37.0B (2025) with +6.8x and +3.2x YoY.

2025: The State of Generative AI in the Enterprise

For all the fears of over-investment, AI is spreading across enterprises at a pace with no precedent in modern software history.

menlovc.com

For those of you who might not know, Rei Inamoto is a designer who has helped shape some of the most memorable marketing sites and brand campaigns of the last 20+ years. He put digital agency AKQA on the map and has been named one of “the Top 25 Most Creative People in Advertising” by Forbes.

Inamoto has made some predictions for 2026:

  1. TV advertising strikes back: Nike releases an epic film ad around the World Cup. Along with its strong product line-up, the stock bounces back, but not all the way.
  2. Relevance > Reach: ON Running tops $5B in market cap; Lexus crosses 1M global sales.
  3. The new era of e-commerce: Direct user traffic to e‑commerce sites declines 5–10%, while traffic driven by AI agents increases 50%+.
  4. New form factor of AI: OpenAI announces its first AI device—a voice-powered ring, bracelet, or microphone.

Bracelet?! I hadn’t thought of that! Back in May, when OpenAI bought Jony Ive’s io, I predicted it would be an earbud. A ring or bracelet is interesting. Others have speculated it might be a pendant.

Retro CRT television with antenna and blank screen on a gray surface, accompanied by a soda can, remote, stacked discs and cable.

Patterns & Predictions 2026

What the future holds at the intersection of brands, business, and tech

reiinamoto.substack.com

Anand Majmudar creates a scenario inspired by “AI 2027”, but focused on robotics.

I created Android Dreams because I want the good outcomes for the integration of automation into society, which requires knowing how it will be integrated in the likely scenario. Future prediction is about fitting the function of the world accurately, and the premise of Android Dreams is that my world model in this domain is at least more accurate than on average. In forming an accurate model of the future, I’ve talked to hundreds of researchers, founders, and operators at the frontier of robotics as my own data. I’m grateful to my mentors who’ve taught me along the way.

The scariest scenes from “AI 2027” are when the AIs start manufacturing and proliferating robots. For example, from the 2028 section:

Agent-5 convinces the U.S. military that China is using DeepCent’s models to build terrifying new weapons: drones, robots, advanced hypersonic missiles, and interceptors; AI-assisted nuclear first strike. Agent-5 promises a set of weapons capable of resisting whatever China can produce within a few months. Under the circumstances, top brass puts aside their discomfort at taking humans out of the loop. They accelerate deployment of Agent-5 into the military and military-industrial complex.

So I’m glad for Majmudar’s thought experiment.

Simplified light-gray robot silhouette with rectangular head and dark visor, round shoulders and claw-like hands.

Android Dreams

A prediction essay for the next 20 years of intelligent robotics

android-dreams.ai

On Corporate Maneuvers Punditry

Mark Gurman, writing for Bloomberg:

Meta Platforms Inc. has poached Apple Inc.’s most prominent design executive in a major coup that underscores a push by the social networking giant into AI-equipped consumer devices.

The company is hiring Alan Dye, who has served as the head of Apple’s user interface design team since 2015, according to people with knowledge of the matter. Apple is replacing Dye with longtime designer Stephen Lemay, according to the people, who asked not to be identified because the personnel changes haven’t been announced.

I don’t regularly cover personnel moves here, but Alan Dye jumping over to Meta has been a big deal in the Apple news ecosystem. John Gruber, in a piece titled “Bad Dye Job” on his Daring Fireball blog, wrote a scathing takedown of Dye, excoriating his tenure at Apple and flogging him for going over to Meta, which is arguably Apple’s arch nemesis.

Putting Alan Dye in charge of user interface design was the one big mistake Jony Ive made as Apple’s Chief Design Officer. Dye had no background in user interface design — he came from a brand and print advertising background. Before joining Apple, he was design director for the fashion brand Kate Spade, and before that worked on branding for the ad agency Ogilvy. His promotion to lead Apple’s software interface design team under Ive happened in 2015, when Apple was launching Apple Watch, their closest foray into the world of fashion. It might have made some sense to bring someone from the fashion/brand world to lead software design for Apple Watch, but it sure didn’t seem to make sense for the rest of Apple’s platforms. And the decade of Dye’s HI leadership has proven it.

I usually appreciate Gruber’s writing and take on things. He’s unafraid to tell it like it is and to be incredibly direct, which makes people both love him and fear him. But in paragraph after paragraph, Gruber just lays into Dye.

It’s rather extraordinary in today’s hyper-partisan world that there’s nearly universal agreement amongst actual practitioners of user-interface design that Alan Dye is a fraud who led the company deeply astray. It was a big problem inside the company too. I’m aware of dozens of designers who’ve left Apple, out of frustration over the company’s direction, to work at places like LoveFrom, OpenAI, and their secretive joint venture io. I’m not sure there are any interaction designers at io who aren’t ex-Apple, and if there are, it’s only a handful. From the stories I’m aware of, the theme is identical: these are designers driven to do great work, and under Alan Dye, “doing great work” was no longer the guiding principle at Apple. If reaching the most users is your goal, go work on design at Google, or Microsoft, or Meta. (Design, of course, isn’t even a thing at Amazon.) Designers choose to work at Apple to do the best work in the industry. That has stopped being true under Alan Dye. The most talented designers I know are the harshest critics of Dye’s body of work, and the direction in which it’s been heading.

Designers can be great at more than one thing and they can evolve. Being in design leadership does not mean that you need to be the best practitioner of all the disciplines, but you do need to have the taste, sensibilities, and judgement of a good designer, no matter how you started. I’m a case in point. I studied traditional graphic design in art school. But I’ve been in digital design for most of my career now, and product design for the last 10 years.

Has UI at Apple gotten worse over the last 10 years? Maybe. I will need to analyze things a lot more carefully. But I vividly remember having debates with my fellow designers about Mac OS X UI choices like the pinstriping, brushed metal, and many, many inconsistencies when I was working in the Graphic Design Group in 2004. UI design has never been perfect in Cupertino.

Alan Dye isn’t a CEO and wasn’t even at the same exposure level as Jony Ive when he was still at Apple. I don’t know Dye, though we’re certainly in the same design circles—we have 20 shared connections on LinkedIn. But as far as I’m concerned, he’s a civilian because he kept a low profile, like all Apple employees.

The parasocial relationships we have with tech executives are weird. I guess it’s one thing if they have a large online presence like Instagram’s Adam Mosseri or 37signals’ David Heinemeier Hansson (aka DHH), but Alan Dye made only a couple of appearances in Apple keynotes and talked about Liquid Glass. In other words, why is Gruber writing 2,500 words in this particular post, which is just one of five posts he’s written covering this story?

Anyway, I’m not a big fan of Meta, but maybe Dye can bring some ethics to the design team over there. Who knows. Regardless, I am wishing him well rather than taking him down.

As regular readers will know, the design talent crisis is a subject I’m very passionate about. Of course, this talent crisis is really about how companies that opt for AI instead of junior-level humans are robbing themselves of the human expertise needed to control the AI agents of the future, and neglecting a generation of talented and enthusiastic young people.

Also obviously, this goes beyond the design discipline. Annie Hedgpeth, writing for the People Work blog, says that “AI is replacing the training ground not replacing expertise.”

We used to have a training ground for junior engineers, but now AI is increasingly automating away that work. Both studies I referenced above cited the same thing - AI is getting good at automating junior work while only augmenting senior work. So the evidence doesn’t show that AI is going to replace everyone; it’s just removing the apprenticeship ladder.

Line chart 2015–2025 showing average employment % change: blue (seniors) rises sharply after ChatGPT launch (~2023) to ~0.5%; red (juniors) plateaus ~0.25%.

From the Sep 2025 Harvard University paper, “Generative AI as Seniority-Biased Technological Change: Evidence from U.S. Résumé and Job Posting Data.” (link)

And then she echoes my worry:

So what happens in 10-20 years when the current senior engineers retire? Where do the next batch of seniors come from? The ones who can architect complex systems and make good judgment calls when faced with uncertain situations? Those are skills that are developed through years of work that starts simple and grows in complexity, through human mentorship.

We’re setting ourselves up for a timing mismatch, at best. We’re eliminating junior jobs in hopes that AI will get good enough in the next 10-20 years to handle even complex, human judgment calls. And if we’re wrong about that, then we have far fewer people in the pipeline of senior engineers to solve those problems.

The Junior Hiring Crisis

AI isn’t replacing everyone. It’s removing the apprenticeship ladder. Here’s what that means for students, early-career professionals, and the tech industry’s future.

people-work.io

Close-up of a Frankenstein-like monster face with stitched scars and neck bolts, overlaid by horizontal digital glitch bars.

Architects and Monsters

According to recently unsealed court documents, Meta discontinued its internal studies on Facebook’s impact after discovering direct evidence that its platforms were detrimental to users’ mental health.

Jeff Horwitz reporting for Reuters:

In a 2020 research project code-named “Project Mercury,” Meta scientists worked with survey firm Nielsen to gauge the effect of “deactivating” Facebook, according to Meta documents obtained via discovery. To the company’s disappointment, “people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness and social comparison,” internal documents said.

Rather than publishing those findings or pursuing additional research, the filing states, Meta called off further work and internally declared that the negative study findings were tainted by the “existing media narrative” around the company.

Privately, however, a staffer insisted that the conclusions of the research were valid, according to the filing.

As more and more evidence comes to light about Mark Zuckerberg and Meta’s failings and possibly criminal behavior, we as tech workers, and specifically as designers making technology that billions of people use, have to do better. While my previous essay, written after the assassination of Charlie Kirk, was an indictment of the algorithm, I’ve come across a couple of pieces recently that bring the responsibility closer to UX’s doorstep.

Hard to believe that the very first fully computer-animated feature film came out 30 years ago. To say that Toy Story was groundbreaking would be an understatement. Look at the animated feature landscape today: it’s almost entirely computer-generated.

In this rediscovered interview with Steve Jobs, recorded exactly a year after the movie premiered in theaters, Jobs talks about a few things, notably how different Silicon Valley and Hollywood were—and still are.

From the Steve Jobs Archive:

In this footage, Steve reveals the long game behind Pixar’s seeming overnight success. With striking clarity, he explains how its business model gives artists and engineers a stake in their creations, and he reflects on what Disney’s hard-won wisdom taught him about focus and discipline. He also talks about the challenge of leading a team so talented that it inverts the usual hierarchy, the incentives that inspire people to stay with the company, and the deeper purpose that unites them all: to tell stories that last and put something of enduring value into the culture.  


And Jobs in his own words:

Well, in this blending of a Hollywood culture and a Silicon Valley culture, one of the things that we encountered was that the Hollywood culture and the Silicon Valley culture each used different models of employee retention. Hollywood uses the stick, which is the contract, and Silicon Valley uses the carrot, which is the stock option. And we examined both of those in really pretty great detail, both economically, but also psychologically and culture wise, what kind of culture do you end up with. And while there’s a lot of reasons to want to lock down your employees for the duration of a film because, if somebody leaves, you’re at risk, those same dangers exist in Silicon Valley. During an engineering project, you don’t want to lose people, and yet, they managed to evolve another system than contracts. And we preferred the Silicon Valley model in this case, which basically gives people stock in the company so that we all have the same goal, which is to create shareholder value. But also, it makes us constantly worry about making Pixar the greatest company we can so that nobody would ever want to leave.

Large serif headline "Pixar: The Early Days" on white background, small dotted tree logo at bottom-left.

Pixar: The Early Days

A never-before-seen 1996 interview

stevejobsarchive.com

Pavel Bukengolts writes a piece for UX Magazine that reiterates what I’ve been covering here: our general shift to AI means that human judgement and adaptability are more important than ever.

Before getting to the meat of the issue, Bukengolts highlights the talent crisis of our own making:

The outcome is a broken pipeline. If graduates cannot land their first jobs, they cannot build the experience needed for the next stage. A decade from now, organizations may face not just a shortage of junior workers, but a shortage of mid-level professionals who never had a chance to develop.

If rote repetitive tasks are being automated by AI and junior staffers aren’t needed for those tasks, then what skills are still valuable? Further on, he answers that question:

Centuries ago, in Athens, Alexandria, or Oxford, education focused on rhetoric, logic, and philosophy. These were not academic luxuries but survival skills for navigating complexity and persuasion. Ironically, they are once again becoming the most durable protection in an age of automation.

Some of these skills include:

  • Logic: Evaluating arguments and identifying flawed reasoning—essential when AI generates plausible but incorrect conclusions.
  • Rhetoric: Crafting persuasive narratives that create emotional connection and resonance beyond what algorithms can achieve.
  • Philosophy and Ethics: Examining not just capability but responsibility, particularly around automation’s broader implications.
  • Systems Thinking: Understanding interconnections and cascading effects that AI’s narrow outputs often miss.
  • Writing: Communicating with precision to align stakeholders and drive better outcomes.
  • Observation: Detecting subtle signals and anomalies that fall outside algorithmic training data.
  • Debate: Refining thinking through intellectual challenge—a practice dating to ancient dialogue.
  • History: Recognizing recurring patterns to avoid cyclical mistakes; AI enthusiasm echoes past technological revolutions.

I would say all of the above not only make a good designer but a good citizen of this planet.

Young worker with hands over their face at a laptop, distressed. Caption: "AI is erasing routine entry-level jobs, pushing young workers to develop deeper human thinking skills to stay relevant."

AI, Early-Career Jobs, and the Return to Thinking

In today’s job market, young professionals are facing unprecedented challenges as entry-level positions vanish, largely due to the rise of artificial intelligence. A recent Stanford study reveals that employment for workers aged 22 to 25 in AI-exposed fields has plummeted by up to 16 percent since late 2022, while older workers see growth. This shift highlights a broken talent pipeline, where routine tasks are easily automated, leaving younger workers without the experience needed to advance. As companies grapple with how to integrate AI, the focus is shifting towards essential human skills like critical thinking, empathy, and creativity — skills that machines can’t replicate. The future of work may depend on how we adapt to this new landscape.

uxmag.com

In a heady, intelligent, and fascinating interview with Sarah Jeong from The Verge, Cory Doctorow—the famed internet activist—talks about how platforms have gotten worse over the years. Using Meta (Facebook) as an example, Doctorow explains its decline over time as a multi-stage process. Initially, it attracted users by promising not to spy on them and by showing them content from their friends, leveraging the difficulty of getting friends to switch platforms. Subsequently, Meta compromised user privacy by providing advertisers with surveillance data (aka ad tracking) and offered publishers traffic funnels, locking in business customers before ultimately degrading the experience for all users by filling feeds with paid content and pivoting to less desirable ventures like the Metaverse.

And publishers, [to get visibility on the platform,] they have to put the full text of their articles on Facebook now and no links back to their website.

Otherwise, they won’t be shown to anyone, much less their subscribers, and they’re now fully substitutive, right? And the only way they can monetize that is with Facebook’s rigged ad market and users find that the amount of stuff that they ask to see in their feed is dwindled to basically nothing, so that these voids can be filled with stuff people will pay to show them, and those people are getting ripped off. This is the equilibrium Mark Zuckerberg wants, right? Where all the available value has been withdrawn. But he has to contend with the fact that this is a very brittle equilibrium. The difference between, “I hate Facebook, but I can’t seem to stop using it,” and “I hate Facebook and I’m not going to use it anymore,” is so brittle that if you get a live stream mass shooting or a whistleblower or a privacy scandal like Cambridge Analytica, people will flee.

Enshit-tification cover: title, Cory Doctorow, poop emoji with '&$!#%' censor bar, pixelated poop icons on neon panels.

How Silicon Valley enshittified the internet

Author Cory Doctorow on platform decay and why everything on the internet feels like it’s getting worse.

theverge.com

Francesca Bria and her collaborators analyzed open-source datasets of “over 250 actors, thousands of verified connections, and $45 billion in documented financial flows” to come up with a single-page website visually showing such connections.

J.D. Vance, propelled to the vice-presidency by $15 million from Peter Thiel, became the face of tech-right governance. Behind him, Thiel’s network moved into the machinery of the state.

Under the banner of “patriotic tech”, this new bloc is building the infrastructure of control—clouds, AI, finance, drones, satellites—an integrated system we call the Authoritarian Stack. It is faster, ideological, and fully privatized: a regime where corporate boards, not public law, set the rules.

Our investigation shows how these firms now operate as state-like powers—writing the rules, winning the tenders, and exporting their model to Europe, where it poses a direct challenge to democratic governance.

Infographic of four dotted circles labeled Legislation, Companies, State, and Kingmakers containing many small colored nodes and tiny profile photos.

The Authoritarian Stack

How Tech Billionaires Are Building a Post-Democratic America — And Why Europe Is Next

authoritarian-stack.info

In just about a year, Bluesky has doubled its userbase from 20 million to 40 million. Last year, in “the wake of Donald Trump’s re-election as president, and Elon Musk’s continued degradation of X, Bluesky welcomed an exodus of liberals, leftists, journalists, and academic researchers, among other groups.” Writing in his Platformer newsletter, Casey Newton reflects back on the year, surfacing the challenges Bluesky has tried to solve in reimagining a more “feel-good feed.”

It’s clear that you can build a nicer online environment than X has; in many ways Bluesky already did. What’s less clear is that you can build a Twitter clone that mostly makes people feel good. For as vital and hilarious as Twitter often was, it also accelerated the polarization of our politics and often left users feeling worse than they did before they opened it.

Bluesky’s ingenuity in reimagining feeds and moderation tools has been a boon to social networks, which have happily adopted some of its best ideas. (You can now find “starter packs” on both Threads and Mastodon.) Ultimately, though, it has the same shape and fundamental dynamics as a place that even its most active users called “the Hellsite.”

Bluesky began by rethinking many core assumptions about social networks. To realize its dream of a feel-good feed, though, it will likely need to rethink several more.

I agree with Newton. I’m not sure that in this day and age, building a friendlier social media platform, free of snark and toxicity, is possible. Users are too used to hiding behind keyboards. It’s not only the shitposters but also the online mobs who jump on anything that seems outside the norms of whatever community a user might be in.

Newton again:

Nate Silver opened the latest front in the Bluesky debate in September with a post about “Blueskyism,” which he defines as “not a political movement so much as a tribal affiliation, a niche set of attitudes and style of discursive norms that almost seem designed in a lab to be as unappealing as possible to anyone outside the clique.” Its hallmarks, he writes, are aggressively punishing dissent, credentialism, and a dedication to the proposition that we are all currently living through the end of the world.

Mobs, woke or otherwise, silence speech and freeze ideas into orthodoxy.

I miss the pre-Elon Musk Twitter. But I can’t help but think it would have become just as polarized and toxic regardless of Musk transforming it into X.

I think text-based social media as it has existed for the last 20 years is akin to tobacco in the mid-1990s: we know it’s harmful. It may be time to slap a big warning label on these platforms and discourage use.

(Truth be told, I’m on the social networks—see the follow icons in the sidebar—but mainly to give visibility into my work here, though largely unsuccessfully.)

White rounded butterfly-shaped 3D icon with soft shadows centered on a bright blue background.

The Bluesky exodus, one year later

The company has 40 million users and big plans for the future. So why don’t its users seem happy? PLUS: The NEO Home Robot goes viral + Ilya Sutskever’s surprising deposition

platformer.news

In a very gutsy move, Grammarly is rebranding to Superhuman. I was definitely scratching my head when the company acquired the eponymous email app back in June. Why is this spellcheck-on-steroids company buying an email product?

Turns out the company has been quietly acquiring other products too, like Coda, a collaborative document platform similar to Notion, building itself into an AI-powered productivity suite.

So the name Superhuman makes sense.

Grace Snelling, writing in Fast Company about the rebrand:

[Grammarly CEO Shishir] Mehrotra explains it like this: Grammarly has always run on the “AI superhighway,” meaning that, instead of living on its own platform, Grammarly travels with you to places like Google Docs, email, or your Notes app to help improve your writing. Superhuman will use that superhighway to bring a huge new range of productivity tools to wherever you’re working.

In shedding the Grammarly name, Mehrotra says:

“The trouble with the name ‘Grammarly’ is, like many names, its strength is its biggest weakness: it’s so precise,” Mehrotra says. “People’s expectations of what Grammarly can do for them are the reason it’s so popular. You need very little pitch for what it does, because the name explains the whole thing … As we went and looked at all the other things we wanted to be able to do for you, people scratch their heads a bit [saying], ‘Wait, I don’t really perceive Grammarly that way.’”

The company tapped branding agency Smith & Diction, the firm behind Perplexity’s brand identity.

Grammarly began briefing the Smith & Diction team on the rebrand in early 2025, but the company didn’t officially select its new name until late June, when the Superhuman acquisition was completed. For Chara and Mike Smith, the couple behind Smith & Diction, that meant there were only about three months to fully realize Superhuman’s branding.

Ouch, just three months for a complete rebrand. Ambitious indeed, but they hit a home run with the icon, an arrow cursor that morphs into a caped human, lovingly called “Hero.”

In their case study writeup, one of the Smiths says:

I was working on logo concepts and I was just drawing the basic shapes, you know the ones: triangles, circles, squares, octagons, etc., to see if I could get a story to fall out of any of them. Then I drew this arrow and was like hmm, that kinda looks like a cursor, oh wow it also kinda looks like a cape. I wonder if I put a dot on top of tha…OH MY GOD IT’S A SUPERHERO.

Check out the full case study for example visuals from the rebrand and some behind-the-scenes sketches.

Large outdoor billboard with three colorful panels reading "The power to be more human." and "SUPERHUMAN", with abstract silhouetted figures.

Inside the Superhuman effort to rebrand Grammarly

(Gift link) CEO Shishir Mehrotra and the design firm behind Grammarly's name change explain how they took the company's newest product and made it the face for a brand of workplace AI agents.

fastcompany.com

In graphic design news, a new version of the Affinity suite dropped last week. Canva purchased Serif, the company behind the Affinity products, last year. After about a year of engineering, they have combined all the products into a single app to offer maximum flexibility. And they made it free.

Of course, that sparks debate.

Joe Foley, writing for Creative Bloq explains:

…A natural suspicion of big corporations is causing some to worry about what the new Affinity will become. What’s in it for Canva?

Theories abound. Some think the app will start to show adverts like many free mobile apps do. Others think it will be used to train AI (something Canva denies). Some wonder if Canva’s just doing it to spite Adobe. “Their objective was to undermine Adobe, not provide for paying customers. Revenge instead of progress,” one person thinks.

Others fear Affinity’s tools will be left to stagnate. “If you depend on a software for your design work it needs to be regularly updated and developed. Free software never has that pressure and priority to be kept top notch,” one person writes.

AI features are gated behind Canva’s paid premium subscription plans. This makes sense, as AI features carry inference costs. With Adobe going all out on its AI features, generative AI is now table stakes for creative and design programs.

Photo editor showing a man in a green jacket with gold chains against a purple gradient background, layers panel visible.

Is Affinity’s free Photoshop rival too good to be true?

Designers are torn over the new app.

creativebloq.com

In thinking about the three current AI-native web browsers, Fanny on Medium examines what lessons product designers can take from their different approaches.

On Perplexity Comet:

Design Insight: Comet succeeds by making AI feel like a natural extension of browsing, not an interruption. The sidecar model is brilliant because it respects the user’s primary task (reading, researching, shopping) while offering help exactly when context is fresh. But there’s a trade-off — Comet’s background assistant, which can handle multiple tasks simultaneously while you work, requires extensive permissions and introduces real security concerns.

On ChatGPT Atlas:

Design Insight: Atlas is making a larger philosophical statement — that the future of computing isn’t about better search, it’s about conversation as an interface. The key product decision here is making ChatGPT’s memory and context awareness central. Atlas remembers what sites you’ve visited, what you were working on, and uses that history to personalize responses. Ask “What was that doc I had my presentation plan in?” and it finds it.

On The Browser Company Dia:

Design Insight: Dia is asking the most interesting question — what happens when AI isn’t a sidebar or a search replacement, but a fundamental rethinking of input methods? The insertion cursor, the mouse, the address bar — these are the primitives of computing. Dia is making them intelligent.

She concludes that they “can’t all be right. But they’re probably all pointing at pieces of what comes next.”

I do think it’s a combo and Atlas is likely headed in the right direction. For AI to be truly assistive, it has to have relevant context. Since a lot of our lives are increasingly on the internet via web apps—and nearly everything is a web app these days—ChatGPT’s profile of you will have the most context, including your chats with the chatbot.

I began using Perplexity because I appreciated its accuracy compared with ChatGPT; this was before ChatGPT had web search. But even with web search built into ChatGPT 5, I still find Perplexity’s (and therefore Comet’s) approach more trustworthy.

My conclusion stands though: I’m still waiting on the Arc-Dia-Comet browser smoothie.

Three app icons on dock: blue flower with paper plane, rounded square with sunrise gradient, and dark circle with white arches.

The AI Browser Wars: What Comet, Atlas, and Dia Reveal About Designing for AI-First Experiences

Last week, I watched OpenAI’s Sam Altman announce Atlas with the kind of confidence usually reserved for iPhone launches. “Tabs were…

uxplanet.org
Worn white robots with glowing pink eyes, one central robot displaying a pink-tinted icon for ChatGPT Atlas, in a dark alley with pink neon circle

OpenAI’s ChatGPT Atlas Browser Needs Work

Like many people, I tried OpenAI’s ChatGPT Atlas browser last week. I immediately made it my daily driver, seeing if I could make the best of it. Tl;dr: it’s still early days and I don’t believe it’s quite ready for primetime. But let’s back up a bit.

The Era of the AI Browser Is Here

Back in July, I reviewed both Comet from Perplexity and Dia from The Browser Company. It was a glimpse of the future that I wanted. I concluded:

The AI-powered ideas in both Dia and Comet are a step change. But the basics also have to be there, and in my opinion, should be better than what Chrome offers. The interface innovations that made Arc special shouldn’t be sacrificed for AI features. Arc is/was the perfect foundation. Integrate an AI assistant that can be personalized to care about the same things you do so its summaries are relevant. The assistant can be agentic and perform tasks for you in the background while you focus on more important things. In other words, put Arc, Dia, and Comet in a blender and that could be the perfect browser of the future.

There were also open rumors that OpenAI was working on a browser of its own, so the launch of Atlas was inevitable.

Building on Matthew Ström-Awn’s argument that true quality emerges from decentralized, ground-level ownership, Sean Goedecke writes an essay exploring how software companies navigate the tension between formalized control and the informal, often invisible work that actually drives product excellence.

But first, what does legibility even mean?

What does legibility mean to a tech company, in practice? It means:

  • The head of a department knows, to the engineer, all the projects the department is currently working on
  • That head also knows (or can request) a comprehensive list of all the projects the department has shipped in the last quarter
  • That head has the ability to plan work at least one quarter ahead (ideally longer)
  • That head can, in an emergency, direct the entire resources of the department at immediate work

Note that “shipping high quality software” or “making customers happy” or even “making money” is not on this list. Those are all things tech companies want to do, but they’re not legibility.

Goedecke argues that while leaders prize formal processes and legibility to facilitate predictability and coordination, these systems often overlook the messier, less measurable activities that drive true product quality and user satisfaction.

All organizations - tech companies, social clubs, governments - have both a legible and an illegible side. The legible side is important, past a certain size. It lets the organization do things that would otherwise be impossible: long-term planning, coordination with other very large organizations, and so on. But the illegible side is just as important. It allows for high-efficiency work, offers a release valve for processes that don’t fit the current circumstances, and fills the natural human desire for gossip and soft consensus.

Seeing like a software company

The big idea of James C. Scott’s Seeing Like A State can be expressed in three points: Modern organizations exert control by maximising “legibility”: by…

seangoedecke.com

Slow and steady wins the race, so they say. And in Waymo’s case, that’s true. Unlike the stereotypical Silicon Valley mantra of “Move fast and break things,” Waymo has been very deliberate and intentional in developing its self-driving tech. In other words, they’re really trying to account for unintended consequences.

Writing for The Atlantic, Saahil Desai:

Compared with its robotaxi competitors, “Waymo has moved the slowest and the most deliberately,” [Bryant Walker Smith] said—which may be a lesson for the world’s AI developers. The company was founded in 2009 as a secretive project inside of Google; a year later, it had logged 1,000 miles of autonomous rides in a tricked-out Prius. Close to a decade later, in 2018, Waymo officially launched its robotaxi service. Even now, when Waymos are inching their way into the mainstream, the company has been hypercautious. The company is limited to specific zones within the five cities it operates in (San Francisco, Phoenix, Los Angeles, Austin, and Atlanta). And only Waymo employees and “a growing number of guests” can ride them on the highway, Chris Bonelli, a Waymo spokesperson, told me. Although the company successfully completed rides on the highway years ago, higher speeds bring more risk for people and self-driving cars alike. What might look like a few grainy pixels to Waymo’s cameras one moment could be roadkill to swerve around the very next.

Move Fast and Break Nothing

Waymo’s robotaxis are probably safer than ChatGPT.

theatlantic.com

OK, so there’s workslop, but there’s also general AI slop. With OpenAI’s recent launch of the Sora app, there’s going to be more and more AI-generated image and video content making the rounds. I do believe there’s a place for using AI to generate imagery. It can be done well (see Christian Haas’s “AI Jobs”). Or not.

Casey Newton, writing in his Platformer newsletter:

In Sora we find the entire debate over AI-generated media in miniature. On one hand, the content now widely derided as “slop” continually receives brickbats on social media, in blog posts and in YouTube comments. And on the other, some AI-generated material is generating millions of views — presumably not all from people who are hate-watching it.

As the content on the internet is increasingly AI-generated, platforms will need to balance how much of it they let in, lest the overall quality drop.

As Sarah Perez noted at TechCrunch, Pinterest has come under fire from its user base all year for a perceived decline in quality of the service as the percentage of slop there increases. Many people use the service to find real objects they can buy and use; the more that those objects are replaced with AI fantasies, the worse Pinterest becomes for them.

Like most platforms, Pinterest sees little value in banning slop altogether. After all, some people enjoy looking at fantastical AI creations. At the same time, its success depends in some part on creators believing that there is value in populating the site with authentic photos and videos. The more that Pinterest’s various surfaces are dominated by slop, the less motivated traditional creators may be to post there.

How platforms are handling the slop backlash

AI-generated media is generating millions of views. But some companies are beginning to rein it in

platformer.news

Definitely use AI at work if you can. You’d be guilty of professional negligence if you don’t. But you must not blindly take output from ChatGPT, Claude, or Gemini and use it as-is. You have to check it, verifying that it’s free from hallucinations and applicable to the task at hand. Otherwise, you’ll generate “workslop.”

Kate Niederhoffer, Gabriella Rosen Kellerman, et al., writing in Harvard Business Review, report on a study by the Stanford Social Media Lab and BetterUp Labs. They write, “Employees are using AI tools to create low-effort, passable looking work that ends up creating more work for their coworkers.”

Here’s how this happens. As AI tools become more accessible, workers are increasingly able to quickly produce polished output: well-formatted slides, long, structured reports, seemingly articulate summaries of academic papers by non-experts, and usable code. But while some employees are using this ability to polish good work, others use it to create content that is actually unhelpful, incomplete, or missing crucial context about the project at hand. The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work. In other words, it transfers the effort from creator to receiver.

Don’t be like this. Use it to do better work, not to turn in mediocre work.

Workslop may feel effortless to create but exacts a toll on the organization. What a sender perceives as a loophole becomes a hole the recipient needs to dig out of. Leaders will do best to model thoughtful AI use that has purpose and intention. Set clear guardrails for your teams around norms and acceptable use. Frame AI as a collaborative tool, not a shortcut. Embody a pilot mindset, with high agency and optimism, using AI to accelerate specific outcomes with specific usage. And uphold the same standards of excellence for work done by bionic human-AI duos as by humans alone.

AI-Generated “Workslop” Is Destroying Productivity

Despite a surge in generative AI use across workplaces, most companies are seeing little measurable ROI. One possible reason is because AI tools are being used to produce “workslop”—content that appears polished but lacks real substance, offloading cognitive labor onto coworkers. Research from BetterUp Labs and Stanford found that 41% of workers have encountered such AI-generated output, costing nearly two hours of rework per instance and creating downstream productivity, trust, and collaboration issues. Leaders need to consider how they may be encouraging indiscriminate organizational mandates and offering too little guidance on quality standards. To counteract workslop, leaders should model purposeful AI use, establish clear norms, and encourage a “pilot mindset” that combines high agency with optimism—promoting AI as a collaborative tool, not a shortcut.

hbr.org

The web is a magical place. It started out as a way to link documents like research papers across the internet, but it has evolved into the face of the internet: the place where we get information and get things done. Writer Will Leitch on Medium:

It is difficult to describe, to a younger person or, really, anyone who wasn’t there, what the emergence of the Internet — this thing that had not been there your entire life, that you had no idea existed, that was suddenly just everywhere — meant to someone who wanted to write. When I graduated college in 1997, the expectation for me, and most wanna-be writers, was that we had two options: Start on the bottom rung of a print publication and toil away for years, hoping that enough people with jobs above you would retire or die in time for you to get a real byline by the time you were 40, or write a brilliant novel or memoir that turned you into Dave Eggers or Elizabeth Wurtzel. That was pretty much it! Then, suddenly, from the sky, there was this place where you could:

  • Write whatever you wanted.
  • Write as long as you wanted.
  • Have your work available to read by anyone, anywhere on the entire freaking planet.

This was — and still is — magical.

The core argument of Leitch’s piece is that while the business and traffic models that fueled web publishing are collapsing, due to the changing priorities of platforms like Google and the dominance of video on social media (i.e., TikTok and Reels), the essential, original magic of publishing on the web isn’t dead.

But that does not mean that Web publishing — that writing on the Internet, the pure pleasure of putting something out in the world and having it be yours, of discovering other people who are doing the same thing — itself is somehow dead, or any less magical than it was in the first place. Because it is magical. It still is. It always was.

It’s the (Theoretical) End of Web Publishing (and I Feel Fine)

Let’s remember why we started publishing on the Web in the first place.

williamfleitch.medium.com

Ian Dean, writing for Creative Bloq, revisits the impact the original TRON movie had on visual effects and the design industry. The film was not nominated for an Oscar for visual effects as the Academy’s members claimed that “using computers was ‘cheating.’” Little did they know it was only the beginning of a revolution.

More than four decades later, TRON still feels like a moment the film industry stopped and changed direction, just as it had done years earlier when Oz was colourised and Mary Poppins danced with animated animals.

Dean asks: what, then, about AI-powered visual effects? Runway and Sora are only the beginning.

The TRON Oscar snub that predicted today’s AI in filmmaking

What we can learn from the 1982 film’s frosty reception.

creativebloq.com

In the scenario “AI 2027,” the authors argue that by October 2027—exactly two years from now—we will be at an inflection point: race to build superintelligence, or slow the pace to fix misalignment issues first.

Writing in The Argument, Derek Thompson takes a different predicted AI doomsday date, 18 months, and argues:

The problem of the next 18 months isn’t AI disemploying all workers, or students losing competition after competition to nonhuman agents. The problem is whether we will degrade our own capabilities in the presence of new machines. We are so fixated on how technology will outskill us that we miss the many ways that we can deskill ourselves.

Degrading our own capabilities includes writing:

The demise of writing matters because writing is not a second thing that happens after thinking. The act of writing is an act of thinking. This is as true for professionals as it is for students. In “Writing is thinking,” an editorial in Nature, the authors argued that “outsourcing the entire writing process to LLMs” deprives scientists of the important work of understanding what they’ve discovered and why it matters.

The decline of writing and reading matters because writing and reading are the twin pillars of deep thinking, according to Cal Newport, a computer science professor and the author of several bestselling books, including Deep Work. The modern economy prizes the sort of symbolic logic and systems thinking for which deep reading and writing are the best practice.

More depressing trends to add to the list.

“You have 18 months”

The real deadline isn’t when AI outsmarts us — it’s when we stop using our own minds.

theargumentmag.com