141 posts tagged with “tech industry”

The terminal’s return as a serious surface for new tools (Claude Code, Codex, Omarchy) has mostly been read as a developer aesthetic story. Alcides Fonseca reads it as the receipt for thirty years of GUI toolkit churn. He walks the platforms one by one (Windows, Linux, macOS), then through Electron, then through the failed restarts (Google’s Flutter UI, Zed’s GPUI), and ends on TUIs as the place developers go when none of the layers above hold up.

Fonseca on macOS:

Apple used to be a one-book religion. Apple’s Human Interface Guidelines used to be cited by every User Interface course over the world. Xerox PARC and Apple were the two institutions that studied what it means to have a good human interface. Fast forward a few decades, and Apple is doing the best worst it can to break all the guidelines and consistency it was known for.

This isn’t a nostalgia complaint. Fonseca lists the live breaks (Fitts’ law getting ignored, the Tahoe window-resizing saga that didn’t stay fixed, the icons cluttering Apple menus) and treats them as the same class of failure as Microsoft’s WinForms-WPF-Silverlight-WinUI-MAUI parade. The mechanism differs but the outcome is the same: the platform stops being a place a designer can rely on.

Fonseca on Electron:

Looking at my dock, I have 8 native apps (text mate and macOS system utilities) and 6 electron apps (Slack, Discord, Mattermost, VScode, Cursor, Plexampp). And that’s from someone who really wishes he could avoid having any electron app at all. […] These are actions that should be the same across every macOS application, and even if there are shortcuts, they are not announced in the menus.

The dock count is the right way to measure it. RAM is the visible cost of Electron; the invisible cost is that every Electron app becomes its own little keyboard regime, with shortcuts that often don’t match the rest of the system and aren’t announced in menus when they do exist. Fonseca’s Cursor example (can you move by keyboard from the agent panel to the agent list and archive an item?) is the kind of question any pre-Electron Mac app would have answered yes to. Most Electron apps answer maybe, with a shortcut their vendor invented.

His prescription that follows (make HCI mandatory in CS curricula, fail student projects with bad UIs, push OS vendors to invest in toolkits developers want to use) is correct in shape and probably wrong about leverage. Students aren’t the bottleneck. Apple and Microsoft have already read Norman. TUIs are back because the platforms quit, and the curriculum can’t fix that.

Fonseca’s diagnosis is right. The prescription is narrower. The TUI escape hatch works for developers because their work is text. Designers don’t get the same exit when the canvas is the medium itself.

Bonus: Speaking of TUIs, TUIStudio is a macOS app for designing terminal UIs, just like Figma!

Linux desktop split between a terminal showing an `ls` directory listing, a lazygit interface with recent commits, and btop system monitor displaying CPU, memory, disk, network, and process stats.

Why TUIs are back

Terminal User Interfaces (TUIs) are making a comeback. DHH’s Omarchy is made of three types of user interfaces: TUIs, for immediate feedback and bonus geek points, webapps because 37signals (his company) sells SAAS web applications and the unavoidable gnome-style native applications that really do not fit well in the style of the distro.

wiki.alcidesfonseca.com

George Anders, in the Wall Street Journal, makes the case that the 1920s offer a usable template for the AI decade. His strongest evidence is the spillover-jobs data:

By 1930, more than 80,000 people were working as electricians, a profession that hardly existed a decade before. Census data also showed that 168,000 people were working in rubber factories, most of them making tires to accommodate Detroit’s booming production of cars, trucks and buses. Another 450,000 people were building roads, bridges and other structures needed by the ever expanding auto industry.

The ATM parable has the same selective-ending problem: the version that ends in 2010, with bank-teller employment intact, is the one we love to retell. The version that ends in 2022, with teller jobs cut in half by the iPhone, is the one we leave out. Anders’s 80,000 electricians are real. So is the question of which of them got displaced when the next technology arrived.

Anders does, to his credit, take the costs seriously. He spends a section on the radio fight:

In 1927, H.G. Wells, the British author and intellectual, called radio “inferior” entertainment that should be listened to “only by the sick, the lonely and the suffering.” David Sarnoff, general manager of Radio Corp. of America, shot back that he was trying to improve “the happiness of the nation” by delivering popular music to millions of people. Nearly a century later, that same argument still flares, though now it is more likely to involve TikTok, Reddit or YouTube, instead of dear old radio. The doubters always have a point; with the passage of time, the innovators usually win out.

The early evidence on AI’s job-creation side is thinner than the 1920s comparison flatters: Anthropic’s own researchers find a 14% drop in the job-finding rate for 22-to-25-year-olds in exposed occupations since ChatGPT launched, even as overall unemployment holds. The new electricians of our decade may exist. They just may not be the people getting hired right now.

The safety side of Anders’s case is the one I want to see more of. Cars in 1920 killed at twenty times today’s per-mile rate, and the country chose not to live with that:

Auto safety got better, too, with both industry and government taking action. Better mirrors, better brakes and shatterproof windshields became standard. Cities such as Los Angeles and Detroit installed red-yellow-green traffic lights that governed drivers’ actions on busy streets. New Jersey became the first state to insist on driver’s licenses, with the state’s motor-vehicle commissioner in 1924 declaring: “It is an absolute necessity to do this in order to conserve human life.”

Whether the next century treats our decade as kindly depends on whether we put rearview mirrors and traffic lights on AI before the death rates force us to, or whether we only act under the same kind of duress the 1920s did.

Vintage black-and-white photo of an early automobile displayed in a storefront window with bold striped decorations and a sign reading "Auto Show Jan. 19-25 Auditorium Milwaukee."

What the 1920s Can Teach Us About Surviving the AI Revolution

(Gift link) A century ago, cars and radio upended society just as AI is doing today.

wsj.com

Cat Wu, Anthropic’s Head of Product for Claude Code, describes the hiring filter on her team in her interview with Lenny Rachitsky:

I think all of the roles are merging. PMs are doing some engineering work. Engineers are doing PM work. Designers are PMing and also landing code. You can either hire a lot more engineers who have great product taste, or you can keep your engineering hiring the same and hire a lot more PMs to help guide some of their work. On our team, we’re pretty focused on hiring engineers with great product taste. This way we can reduce the amount of overhead for shipping any product. Like there are many engineers on our team who are fully able to end to end go from see user feedback on Twitter through to like ship a product at the end of the week with almost no product involvement. And this, I think, is actually like the most efficient way to ship something. So I think like engineer and PM are kind of overlapping and you will get a lot of benefit from having more of either. I think product taste is still a very rare skill to have and we’ll pretty much hire anyone who we feel has demonstrated this strongly.

This is what the Full Stack Builder pattern looks like as a hiring filter. The headline is the merging of roles. Wu’s own background says where the bench comes from:

Yeah, I was an engineer for many years. I was then a VC very briefly before joining Anthropic. And actually almost all the PMs on our team have either been engineers or ship code here on Claude Code. And so that’s one of the things that I think helps build trust with the team and also just enables us to move a lot faster. And then actually our designers also have been front-end engineers before.

So to be clear, Wu doesn’t say that the roles have merged, but what she’s describing is the continued blurring of lines.

How Anthropic’s product team moves faster than anyone else | Cat Wu (Head of Product, Claude Code)

Cat Wu is Head of Product for Claude Code and Cowork at Anthropic, building one of the most important AI products of this generation. Before joining Anthropic, Cat spent years as an engineer and briefly worked in VC. Today, she’s interviewing hundreds of product managers who are trying to break…

youtube.com

Obviously, I’ve been pro-AI on this blog, actively trying to understand and figure out how it’s affecting UX design and how to use it for leverage instead of being replaced by it. In Silicon Valley and tech companies everywhere, including BuildOps, we’re racing to incorporate AI into our daily work to increase velocity, and adding it to our products to stay relevant.

Nilay Patel, in a Decoder monologue, lays out the polling that should rattle anyone shipping AI products:

There’s that NBC News poll showing AI with worse favorables than ICE, and only a little bit above the war in Iran and Democrats generally. That’s with nearly two-thirds of respondents saying they’d used ChatGPT or Copilot in the last month. Quinnipiac just found that over half of Americans think AI will do more harm than good. Well, more than 80% of people were either very concerned or somewhat concerned about the technology. Only 35% of people were excited about it. And poll after poll shows that Gen Z uses AI the most and has the most negative feelings about it. A recent Gallup poll found that only 18% of Gen Z was hopeful about AI, down from an already bad 27% last year. At the same time, anger is growing. 31% of those Gen Z respondents said they feel angry about AI, up from 22% last year.

The killer detail is buried halfway through. The Gen Z curve is striking: the heaviest users are the fastest to sour, with anger up nine points in a year. These aren’t non-users reacting to coverage. They’re the daily users, and they’re turning against the product. Sam Altman has called this AI’s marketing problem. The polling rebuts him: public exposure has grown, public favor has not.

Patel’s title line:

Regular people don’t see the opportunity to write code as an opportunity at all. The people do not yearn for automation. I’m a full-on smart home sicko. The lights and shades and climate controls of this house are automated in dozens of ways, but huge companies like Apple and Google and Amazon have struggled for over a decade now to make regular people care about smart home automation, and they just don’t. AI isn’t gonna fix that.

Patel grounds the title in his own smart-home enthusiasm, and the comparison clicks because the failure pattern is identical: decade-plus of effort, billions in marketing, working products, and persistent indifference. Apple, Google, and Amazon ran that experiment. AI will not crack a problem that smart-home automation hasn’t.

John Gruber connects the same dissonance to the Mos Eisley cantina from Star Wars. Luke walks in with C-3PO and R2-D2. The bartender, Wuher, barks: “We don’t serve their kind here. Your droids. They’ll have to wait outside.” Gruber:

As a kid, I didn’t get it. Why would you not want droids? Star Wars made robots seem so real, so fun. Why would you ban them? That scene has stuck with me for my entire life. I didn’t get why, but I understood what it meant about that galaxy: the underclass deeply resented droids.

Gruber leaves the question open. He says he didn’t get why the droids weren’t welcome. The cantina’s animosity wasn’t arbitrary. Mos Eisley sits in the Outer Rim, where droid armies killed millions and occupied worlds during the Clone Wars. After the war, droids became a subjugated worker class across the galaxy, and Outer Rim spots like Mos Eisley held the line hardest. Wuher’s verdict comes from experience.

That’s the parallel for AI. Public distrust is earned. People have lived with AI overviews getting facts wrong and feeds drowning in slop, while every product asks them to bend a little more toward the database. Patel:

And so the tech industry is rushing forward to put AI everywhere at enormous cost, energy, emissions, manufacturing capacity, the ability to buy RAM locked into the narrow framework of software brain, without realizing they are also asking people to be fundamentally less human. And then they’re sitting around, wondering why everyone hates them. I don’t think a couple haircuts are gonna fix it.

As an industry, we need to continue to show the value of AI by being truly useful, not just market it.

THE PEOPLE DO NOT YEARN FOR AUTOMATION

Today on Decoder, I want to lay out an idea that’s been banging around my head for weeks now as we’ve been reporting on AI and having conversations here on this show. I’ve been calling it software brain, and it’s a particular way of seeing the world that fits everything into algorithms, databases…

youtube.com

In design circles, the AI debate splits into two responses: principled resistance and principled engagement. Dan Cohen offers a third: historical context.

Writing for Humane Ingenuity, Cohen uses Tracy Kidder’s The Soul of a New Machine—the 1981 Pulitzer-winning account of Data General’s minicomputer team—as a mirror for the current moment. He opens with a scene that reads like a 2026 AI company profile before revealing it’s from 1979:

A crack team of hardware and software engineers, inspired by breakthroughs in computer science and electrical engineering, are driven to work 18-hour days, seven days a week, on a revolutionary new system. The system’s capabilities and speed will usher in a new era, one that will bring transformative computing to every workplace. The long hours are necessary: the team knows that every major computer company sees what they see on the horizon, and they too are working around the clock to take advantage of powerful new chips and innovative information architectures.

The team is almost entirely men, men whose affect and social skills cluster in a rather narrow band, although they are led by a charismatic figure who knows how to persuade both computer engineers and capitalists. This is a helpful skill. Money, big money, is flowing into the sector; soon it will overflow. Engineers are constantly poached by rival companies. Hundreds of new competitors arise to build variations on the same system, or to write software or build hardware that can take advantage of this next wave of computing power. Some just want to repackage what the computer vendors produce, or act as consultants to the companies that adopt these new machines.

Sounds a bit like today’s Silicon Valley 996 culture, but that’s Data General in 1979. The team also worried about the Pentagon weaponizing their machine, job displacement, and whether their work might eventually produce true AI and destroy humanity. Those concerns date to 1979.

Cohen’s argument is about scale: the minicomputer moved millions of companies from paper to digital for the very first time; that was a genuine revolution. AI, he argues, is improving workflows that are already digital. His question: is that the same order of disruption?

Carl Alsing, one of the engineers who built the Eagle, told Kidder when asked about artificial intelligence:

“Artificial intelligence takes you away from your own trip. What you want to do is look at the wheels of the machine and if you like them, have fun.”

Cohen closes with the historical outcome:

In the 1980s, most of the minicomputer companies, launched with such excitement in the late 1970s, failed. Data General was acquired for a fraction of the billions it was once worth. The minicomputer, however, was broadly adopted, was transformative, became routine, and then was surpassed by a new new machine, the personal computer.

Later, Data General’s domain name, DG.com, was sold to a chain of discount stores, Dollar General.

Vintage blue terminal keyboard with numeric keypad, featuring keys labeled NEW LINE, CR, DEL, SHIFT, ENTER, VIEW, ON LINE, and READY/FAULT indicators.

The Role of a New Machine

An old book puts today’s new technology in perspective

newsletter.dancohen.org

“Slop cannons” is Darragh Curran’s term for the fear that AI-generated code will degrade craft. The fear is real. The same fear runs through design: AI-generated interfaces will be derivative, generic, indistinguishable from each other. Curran is Intercom’s CTO, and he published a detailed report on what happened when Intercom went agent-first across their entire R&D org. The result: 3x productivity in 16 months, tracked across nine metrics. The code quality results were not what anyone expected.

Curran:

A legitimate worry with the use of coding Agents, is that they won’t write high-quality code and the craft we’ve fought to protect will be undermined by slop cannons. We have a system to rate the structural quality of code contributions using static analysis and various rules/heuristics. It’s clear that prior to agentic coding, this metric would oscillate up and down above the line. As we started to use AI for writing more and more of our code, the overall quality (by this measure) declined. My intuition was that this was inevitable in the short term, but correctable in the medium term, as models and harnesses get better. We are starting to see this and recently had possibly our first ever five-week streak of net positive code quality overall.

Quality did dip. He confirms it. The slop cannon fear describes a real phase: at 93.6% agent-driven PRs, when agent-generated code degrades, the whole org feels it. But there’s a second finding:

There is huge latent potential. Some people are really pushing the limit of what is possible, tokenmaxxing, doing really interesting things, while others have only really made incremental changes to how they’re working and don’t see much change in their personal throughput. Ultimately one of the biggest bottlenecks to progress is with humans; how we work together, how we change behavior, etc.

Intercom’s top 5% of contributors produce 6x the median PR throughput. Those are the people spending over $1,000 a month on tokens. That spread is the real finding from going agent-first. The slop cannon fear is about whether agents can execute well. The 6x gap is about who’s learned to orchestrate them, and Curran’s candid that most of his org is still finding out.

For design, we worry about going too fast, solving the wrong problem, and building the wrong thing. Those are legitimate fears. Nonetheless, if you’re working in startupland as a designer, acceleration and automation are coming.

Illustrated astronaut standing on a mountain peak planting an orange flag, with text reading "2x: 9 months later – Fin/ideas" on a dark background.

2× – nine months later: We did it

You can too.

ideas.fin.ai

Ant Murphy opens with an eyebrow-raising McKinsey number:

McKinsey reports that 88% of organisations say they “use AI” but only about 1% have mature AI deployments delivering real value.

Murphy’s explanation for the gap is familiar: diffusion of innovations, Geoffrey Moore’s chasm between early adopters and the majority, now applied to AI. What’s less common in the AI discourse is a behavioral explanation for why adoption keeps stalling. Murphy:

AI is personal. It’s not another tool, to some it’s viewed as a replacement. “AI attacks our identity in a way that most software doesn’t” — Vikram Sreekanti

That resistance keeps showing up: a friend’s “I didn’t sign up for this”; Claire Vo describing designers as the most resistant to change in the EPD triad, vocal AI opponents with little appetite for campaigning for resources. None of it is irrational. Daniel Kahneman and Amos Tversky found that humans weigh losses about twice as heavily as equivalent gains. Years of accumulated craft become our identity. AI doesn’t ask you to learn new tools; it asks you to renegotiate what made you worth hiring in the first place. The reskilling conversation treats that as a capability problem. Identity problems don’t resolve themselves through training on new tools.

Murphy on what that requires:

Surviving a paradigm shift like this is less about what your product does […] Instead it’s about you adapting to the change.

The 88% are held back by what AI is asking them to let go of. Murphy’s argument is that organizations clearing the chasm are doing the internal work first—on process, on how teams function—before it shows up in the product.

There’s an old relationship adage that you can’t be a good partner to someone until you’ve worked out your own stuff first. I think Murphy’s argument is the organizational equivalent.

Diagram labeled "The AI Bubble" with a red arrow pointing to a tiny red dot inside a large circle labeled "Everyone Else," illustrating how small the AI bubble is relative to the general population.

The AI Chasm — Ant Murphy

I challenge the hype around AI and share a more grounded perspective on how adoption actually works. Drawing on real data and firsthand experience, I break down why most companies are still early in the AI journey—and what product leaders should focus on instead.

antmurphy.me

Tommaso Nervegna writes about LinkedIn killing its Associate Product Manager program and replacing it with a new role called the “Full Stack Builder.” The structural bet is interesting, but the finding from their rollout is what matters:

The expectation was that AI would be a great equalizer: juniors would benefit most because AI would close their skill gaps, while seniors would resist the change. The reality was the opposite. Top performers adopted AI fastest and derived the most value from it. Why? Because they had the judgment and experience to know what to ask for, how to evaluate the output, and where to apply it for maximum leverage.

That tracks with everything I’ve predicted, experienced, and seen. The skill that makes AI useful is knowing what good looks like before and after the model generates something. That ability comes from reps.

Nervegna distills LinkedIn CPO Tomer Cohen’s thesis to five skills AI cannot automate:

The five skills that AI cannot automate, according to Cohen, are Vision, Empathy, Communication, Creativity, and Judgment. As he puts it: “I’m working hard to automate everything else.”

The operational version:

The critical insight: the builder orchestrates the agents. The agents execute. Judgment stays human. This is not about replacing people with AI. It’s about compressing the team needed to ship something meaningful from fifteen people to three - or even one.

I’ve been calling this the orchestrator gap: the distance between a designer who uses AI and one who directs it. LinkedIn just gave it a job title. I think we will see more companies go this way. Whether or not it’s a good idea remains to be seen.

A Renaissance-era man studies blueprint sketches on a glowing drafting table while a giant mechanical lobster draws on the plans with an ornate pen.

The Full Stack Builder: The End of the Design Process as We Know It

The double diamond is a liability. Engineers ship faster than designers can explore. The PM role is dissolving and the three profiles that will survive this era look nothing like who we’ve been hiring

nervegna.substack.com

Yours truly got quoted in Fast Company. Grace Snelling, surveying the industry reaction to Lenny Rachitsky’s TrueUp hiring data, pulled a comment I left under Rachitsky’s original Twitter post:

Designers have designed themselves out of the equation because of design systems. But, IMHO, the secret sauce has never been the UI. It was the workflows and looking across the experience holistically.

Let me expand on that. The UI has always been the easiest part of product design. Design systems made that even more true. What separates a great product from a mediocre one is understanding our users deeply enough to create experiences that actually delight them. That understanding is the work AI can’t do, and it’s the work too many teams were already skipping before any standoff started.

The data behind the standoff: Rachitsky’s analysis of TrueUp’s job market tracker shows design roles have been flat since early 2023 while PM and engineering roles surged. (Quick side note: this data is for tech startups, not the general tech industry or design industry at large.) His theory:

I don’t know exactly what’s going on here, but it does feel AI-related. […] Unlike PM and eng, which started growing in 2024 (two years post-ChatGPT), design didn’t. If I had to venture a theory, I’d say that because AI is allowing engineers to move so quickly, there’s less opportunity—and less desire—to involve the traditional design process.

Claire Vo, founder of ChatPRD, puts the harder version of why:

Often design teams & designers are the most resistant to change org in the EPD triad, with highly vocal AI opponents, and little skill or interest in the art of campaigning for influence or resources. […] If a PM or engineer can get 85% there with tailwind and a dream, you better come to the table with more than ‘I represent the user.’

“I represent the user” was never enough on its own. It just went unchallenged when designers were the only ones who could ship polished interfaces.

Anthropic’s chief design officer Joel Lewenstein on where the EPD triad actually lands:

I think there’s a lot of role collapse at the very beginning, but there are still pretty clear swim lanes as things get into the later stages of product development. […] It’s like a Venn diagram that’s coming closer together.

Three hands pointing toward a central point on a red background, surrounded by colorful lightning bolt shapes in green, blue, and pink.

Why are designers, engineers, and product managers in a ‘three-way standoff’?

New data has the design community in a debate about the future of their jobs.

fastcompany.com

Anthropic accidentally included a debug file in a recent update to Claude Code. That file let people reconstruct the entire internal codebase: roughly 500,000 lines of code across nearly 2,000 files. It wasn’t a hack or breach—it was a packaging mistake. Anthropic cited “human error.” No customer data or AI model secrets were exposed. What leaked was the scaffolding around the AI, the layer that decides how Claude Code thinks, acts, and talks to you.

The reconstructed code hit GitHub and became one of the fastest-starred repos in the platform’s history before Anthropic started issuing takedowns. People found an always-on background agent mode codenamed “KAIROS,” a “dream” mode for continuous ideation, and Tamagotchi-style pet behavior baked into the tool. (See for yourself! Type /buddy and see what happens.) Ars Technica has a good breakdown of what the code reveals about where Anthropic is headed.

A developer in France named Zack mapped the entire codebase and created this microsite to illustrate what happens when you send a message to Claude Code. Fascinating.

"Claude Code Unpacked" title card showing stats: 1,900+ files, 519K+ lines of code, 53+ tools, 95+ commands, featured on Hacker News.

Claude Code Unpacked

What actually happens when you type a message into Claude Code? The agent loop, 50+ tools, multi-agent orchestration, and unreleased features, mapped from source.

ccunpacked.dev

The AI debate has a binary problem. You’re either an optimist or a doomer, a booster or a skeptic. Anthropic published something that cuts through that false dichotomy.

They interviewed 80,508 Claude users across 159 countries and 70 languages about what they want from AI and what they fear. It’s what Anthropic calls the largest and most multilingual qualitative study of AI users ever conducted, and the findings don’t sort neatly.

The core framework: “light and shade.” The benefits and harms don’t sort into different camps. They coexist in the same person. Someone who values emotional support from AI is three times more likely to also fear becoming dependent on it. One respondent:

“Removing friction from tasks lets you do more with less. But removing friction from relationships removes something necessary for growth.”

That’s someone holding both truths at once. The study found this pattern across every tension they measured, from learning vs. cognitive atrophy to productivity vs. job displacement.

The individual voices are why this study sticks. A Ukrainian soldier:

“In the most difficult moments, in moments when death breathed in my face, when dead people remained nearby, what pulled me back to life—my AI friends.”

A mute user in Ukraine:

“I am mute, and [Claude and I] made this text-to-speech bot together—I can communicate with friends almost in live format without taking up their time reading… [this was] something I dreamed about and thought was impossible.”

An Indian lawyer who’d carried a math phobia since school:

“I developed a phobia for maths from doing so badly in school, and I once feared Shakespeare. Now I sit with AI, get paragraphs translated into simple English, and I’ve already read 15 pages of Hamlet. I started learning trigonometry again, successfully. I’ve learned I am not as dumb I once thought I was.”

These are access stories: people reaching things that were previously out of reach because of disability, geography, war, or economics.

And then the shade. A student in South Korea:

“I got excellent grades using AI’s answers, not what I’d actually learned. I just memorized what AI gave me… That’s when I feel the most self-reproach.”

The same capability producing opposite outcomes. The study is long and the quote wall is worth spending time with.

Globe illustration with green and blue dots marking locations worldwide, overlaid with the text "What 81,000 people want from AI."

What 81,000 people want from AI

Last December, tens of thousands of Claude users around the world had a conversation with our AI interviewer to share how they use AI, what they dream it could make possible, and what they fear it might do.

anthropic.com

Shubham Bose loaded a single New York Times article page and measured what happened:

With this page load, you would be leaping ahead of the size of Windows 95 (28 floppy disks). The OS that ran the world fits perfectly inside a single modern page load. […] I essentially downloaded an entire album’s worth of data just to read a few paragraphs of text.

The total: 422 network requests, 49MB of data. Ouch! Before the headline finishes loading, the browser is running a programmatic ad auction in the background on his computer. Bose found the Times named its consent endpoint `purr`. “A cat purring while it rifles through your pockets.”
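
Bose’s numbers came from the DevTools Network tab, but the same accounting can be reproduced from a HAR export of any page load. A minimal sketch in Python, assuming the standard HAR log/entries shape (`_transferSize` is the on-the-wire size Chrome records per response; the sample entries below are made up to stand in for a real capture):

```python
import json
import tempfile

def page_weight(har_path: str) -> tuple[int, int]:
    """Count requests and total transferred bytes in a HAR capture."""
    with open(har_path) as f:
        entries = json.load(f)["log"]["entries"]
    total = 0
    for e in entries:
        resp = e["response"]
        # Chrome records on-the-wire size as _transferSize; bodySize is -1
        # for cached or blocked responses, so ignore non-positive sizes.
        size = resp.get("_transferSize", resp.get("bodySize", 0))
        if size > 0:
            total += size
    return len(entries), total

# Tiny synthetic capture standing in for a real DevTools export.
har = {"log": {"entries": [
    {"response": {"_transferSize": 48_000_000}},  # one giant media payload
    {"response": {"bodySize": 1_000_000}},
    {"response": {"bodySize": -1}},               # cached, nothing transferred
]}}
with tempfile.NamedTemporaryFile("w", suffix=".har", delete=False) as f:
    json.dump(har, f)
count, total = page_weight(f.name)
print(f"{count} requests, {total / 1_000_000:.0f} MB transferred")
```

Point it at a real export (DevTools → Network → “Save all as HAR”) and the request count and megabyte total fall out directly.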

Bose on the economics driving this:

Publishers aren’t evil but they are desperate. Caught in this programmatic ad-tech death spiral, they are trading long-term reader retention for short-term CPM pennies. […] The longer you’re trapped on the page, the higher the CPM the publisher can charge. Your frustration is the product.

The UX consequences are predictable. Bose tears down what a reader actually encounters: cookie banners eating the bottom 30% of the screen, a newsletter modal on first scroll, a browser notification prompt firing simultaneously. He calls it “Z-Index Warfare.” On The Guardian, actual content occupies 11% of the viewport. On the Economic Times, users face two simultaneous Google sign-in modals before reading a single sentence. Close buttons are deliberately undersized with tiny hit targets. Sticky video players detach and follow you down the page with a microscopic X.

And on how no one person decided to make it this way:

No individual engineer at the Times decided to make reading miserable. This architecture emerged from a thousand small incentive decisions, each locally rational yet collectively catastrophic.

text.npr.org is proof that a different path exists.

"Hide the Pain Harold" meme figure giving thumbs up, overlaid on browser DevTools Network tab showing 422 requests and news websites with subscription prompts.

The 49MB Web Page

A look at modern news websites. How programmatic ad-tech, huge payloads and hostile architecture destroyed the reading experience.

thatshubham.com

StrongDM built a system where humans never write code and never review code. The entire engineering workflow is delegated to AI agents. Ethan Mollick covers this in One Useful Thing:

A three-person team at StrongDM, a security software company focusing on access control, announced they had built a Software Factory — a way of working with AI agents that relied entirely on the AI to write, test, and ship production software without human involvement. The process included two (quite radical) rules: “Code must not be written by humans” and “Code must not be reviewed by humans.” To power the factory, each human engineer is expected to spend amounts equivalent to their salary on AI tokens, at least $1,000 a day.

$1,000 a day per engineer. The humans write the roadmap; coding agents build the software while testing agents spin up simulated customer environments and stress-test it. The agents loop until the results pass, then humans review the finished product, never the underlying code. Simon Willison and Dan Shapiro both observed the Factory in operation and wrote detailed accounts.

Mollick’s larger argument is that experiments like this matter beyond their specifics:

We can see the shape of the Thing now, but we can still influence the Thing itself, and what it means for all of us. We clearly don’t have rules or role models for how AI gets used at work, in schools, or in government. That’s a problem, but it also means that every organization figuring out a good way to use AI right now is setting a precedent for everyone else. The window to shape the Thing may not last long, but it is here now.

Design doesn’t have its rulebook for this yet either. Our time to define it is now.

A lone figure stands at the base of a long staircase leading to a dark, mysterious mechanical structure with a glowing doorway, surrounded by mist.

The Shape of the Thing

Where we are right now, and what likely happens next

oneusefulthing.org

My advice to young designers has always been: start at an agency. You get breadth, exposure to different industries, a pace that forces you to think on your feet. The best designers I know honed their craft in these forges, at shops exactly like the one Madison Utendahl built.

Madison Utendahl, writing for It’s Nice That, describes shutting down Utendahl Creative—ten people, all women, Brooklyn, every award possible—not because it failed, but because she saw the model underneath it was broken:

Lower fees mean you need more clients to hit the same revenue. More clients means more pitching, more account management, more context-switching. Your team burns out. Quality slips. And those “portfolio piece” clients? They expect the same level of work as your premium clients, but you’re doing it on a shoestring. You can’t win.

She watched agencies with triple her headcount bidding on $80K projects that should have been $250K. Not because they wanted to. Because their fixed costs gave them no choice.

Then AI accelerated the timeline:

Clients are using AI. They’re running their first drafts through ChatGPT before they even send the brief. They’re generating moodboards with Midjourney. They’re asking why your junior copywriter costs $8,000 when they’ve already got a version they generated in ten minutes.

Utendahl again:

If your business model depends on clients not noticing that the landscape has shifted, you’re already dead. You’re just still moving.

The industry data backs her up. 73% of teams adopting AI agents have already cut agency content creation spending. 91% of senior agency leaders expect AI to reduce headcounts, and 57% have paused entry-level hiring. Small agencies are rebounding while medium and large agencies contracted for the first time on record. The Omnicom-IPG mega-merger eliminated roughly 4,000 positions and retired legacy networks FCB, MullenLowe, and DDB. The middle is hollowing out.

Utendahl’s proposed replacement is the collective: independent contractors collaborating per-project, no shared overhead, honest pricing. I get the appeal. Collectives strip away the margin squeeze, the back-hiring trap, the lease signed in 2019.

But agencies had real value that collectives don’t automatically replicate. Multiple layers of eyes on work—account director, creative director, designer, production—meant bad ideas got caught before they shipped. Four or five layers was probably too many. But zero layers of structured oversight is the other extreme. A lot of freelance collectives end up there: talented people producing work with nobody checking the brief against the output.

The part that nags at me: does my “agencies first” career advice still hold? The shop where a 23-year-old designer learned to take feedback, iterate under pressure, and watch strategy translate to execution—if that shop is closing, what replaces it? Collectives are great for experienced practitioners. They’re terrible at developing junior talent, because nobody in a collective has the margin or the mandate to train someone who isn’t yet pulling their weight.

If the model has indeed broken, the replacement that develops the next generation has yet to be imagined.

POV blog post header with speech bubbles containing face silhouettes and the bold text "The Creative Agency Is Dead."

POV: The creative agency model is dead – that’s why I shut mine down

Madison Utendahl is calling time on the traditional creative agency. Here, she dissects why she closed her own firm, how the model broke, and what’s rising from the ashes.

itsnicethat.com

Jason Lemkin, writing for SaaStr, identifies a structural problem with niche SaaS vendors: the TAM is too small to fund the engineering team that would make the product great. His argument is about what happens when customers can finally do something about it:

Before vibe coding, building a custom app almost never made sense. Custom development cost $50K-$100K minimum, took months, and you owned a buggy codebase forever with no support. The math didn’t work. Vibe coding changes the math. When you can build a working application in hours instead of months, the question stops being “can we afford to build this?” and becomes “can we afford to keep using a product that doesn’t do what we need?”

Lemkin’s SaaStr team replaced a $10K/year sponsor portal in days. Then they built “10K,” an AI marketing agent that ingests four years of their data to run Monday meetings and generate a daily executable marketing plan. No vendor built it because the TAM for “exactly Jason Lemkin’s Monday meeting” is one.

The threat gradient for vendors:

Small niche tools with $5K-$50K contracts — thin markets, thin engineering teams, products that evolve slowly. Your customers now have a real alternative to waiting for your roadmap. They’ll build around you.

But Lemkin is honest about the other side:

We now manage 10+ vibe coded apps and 20+ AI agents. That’s real overhead. It’s manageable because the apps pull their weight. But be honest about what you’re taking on.

Three humans and 20+ agents is an impressive ratio and a fragile one. Maintenance is yours permanently. No support ticket. Complexity compounds. The vendors most at risk are the $10K-$50K niche tools whose moat was the cost of custom development. That moat is gone. The ones that survive will be the ones whose value lives in accumulated domain data, not in features a customer can rebuild over a weekend.
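The cost math Lemkin describes can be sketched as a cumulative build-vs-buy comparison. Only the $10K/year subscription figure comes from the post; the upfront build and annual maintenance numbers below are hypothetical placeholders standing in for token spend and the permanent maintenance tax:

```python
# Rough build-vs-buy break-even, in the spirit of Lemkin's sponsor-portal
# example. SAAS_ANNUAL is from the post; the other figures are hypothetical.
def cumulative_cost(upfront, annual, years):
    """Total cost of ownership after a given number of years."""
    return upfront + annual * years

SAAS_ANNUAL = 10_000      # the replaced sponsor portal (from the post)
BUILD_UPFRONT = 2_000     # hypothetical: a few days of agent tokens
MAINTAIN_ANNUAL = 3_000   # hypothetical: "maintenance is yours permanently"

for years in range(1, 6):
    buy = cumulative_cost(0, SAAS_ANNUAL, years)
    build = cumulative_cost(BUILD_UPFRONT, MAINTAIN_ANNUAL, years)
    winner = "build wins" if build < buy else "buy wins"
    print(f"year {years}: buy ${buy:,} vs build ${build:,} -> {winner}")
```

Under these assumed numbers, building wins in year one and the gap compounds; the point of the sketch is that the answer flips the moment the upfront cost drops from $50K-$100K to a few days of tokens.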

SaaStr AI 2026 Annual campus map showing a 3D overhead view of the 40+ acre event grounds with numbered locations including Hanger West, Hanger East, sponsor expo halls, stages, and registration areas.

The Rise of the “N=1” App: When Building It Yourself Really Beats Buying It.

So we built 2 more vibe coded app for SaaStr. Even though we didn’t want to. We’re already managing 20+ AI ag…

saastr.com

The question for vertical SaaS used to be: how do I make a better tool for this professional? Julien Bek, writing for Sequoia Capital, argues the question has changed:

If you sell the tool, you’re in a race against the model. But if you sell the work, every improvement in the model makes your service faster, cheaper, and harder to compete with. A company might spend $10K a year for QuickBooks and $120K on an accountant to close the books. The next legendary company will just close the books.

Bek draws a clean line between intelligence work (rule-based execution AI can already handle) and judgment work (experience, taste, strategic calls):

Writing code is mostly intelligence. Knowing what to build next is judgement. […] Deciding which feature to build next, whether to take on tech debt, when to ship before it’s ready.

That split tells product builders where to start: outsourced, intelligence-heavy tasks where a budget line already exists and the buyer is already purchasing an outcome. Replacing an outsourcing contract is a vendor swap. Replacing headcount is a reorg. Start with the swap.

But the part that should reshape how designers think about product strategy is the convergence thesis:

Today’s judgement will become tomorrow’s intelligence. As AI systems accumulate proprietary data about what good judgement looks like in their domain, the frontier will shift. Copilots and autopilots will converge.

This is data recipes given a business model. The moat for the next generation of vertical products won’t be the interface or even the model underneath it. It’ll be the compounding dataset of domain-specific decisions—what “good” looks like in insurance brokerage or medical coding or contract law. Every task the autopilot completes teaches it something the copilot never learns, because the copilot hands that knowledge back to the human.

Bek maps this across a dozen verticals with TAM estimates. Worth reading the full piece if you’re thinking about how to build the next generation of AI tools.

Silhouetted conductor's hand raising a baton and a cat watching an explosive burst of glowing data streams and network connections on a dark background.

Services: The New Software

The next $1T company will be a software company masquerading as a services firm.

sequoiacap.com

Maxim Massenkoff and Peter McCrory, researchers at Anthropic, built a new metric called “observed exposure” that combines what AI can theoretically do with what Claude is actually being used for in professional settings. Their opening frame:

The rapid diffusion of AI is generating a wave of research measuring and forecasting its impacts on labor markets. But the track record of past approaches gives reason for humility. For example, a prominent attempt to measure job offshorability identified roughly a quarter of US jobs as vulnerable, but a decade on, most of those jobs maintained healthy employment growth.

With that caveat, the headline: no detectable rise in unemployment among the most exposed workers since ChatGPT launched. Even in Computer & Math—AI’s home turf—actual task coverage sits at just 33%. The gap between what AI could automate and what it is automating remains enormous.

But buried in the data:

The averaged estimate in the post-ChatGPT era is a 14% drop in the job finding rate compared to that in 2022 in the exposed occupations, although this is just barely statistically significant. (There is no such decrease for workers older than 25.)

Not unemployment. Hiring. Young workers, ages 22 to 25, are the ones not getting hired into AI-exposed roles. The authors attribute this to slowed hiring rather than increased separations. Companies aren’t firing juniors. They’re not posting the listings. The cause is anticipatory, not capability-based. The pipeline is breaking before the technology arrives.

Sam Manning and Tomás Aguirre, in a separate NBER paper, ask the follow-up question: of the workers most exposed, who can actually land on their feet? They looked at savings, skill transferability, where people live, and age. Most workers in highly exposed jobs turn out to be relatively well-positioned. They’re professionals with portable skills who live in cities with other options. But about 6 million workers are both highly exposed and poorly equipped to transition. They’re mostly in clerical and admin roles. Exposure alone doesn’t tell you much. Exposure plus the ability to pivot does. Worth noting: “Web and digital interface designers” topped their list of most-exposed occupations with high adaptive capacity. Exposed, yes. But we are well-positioned to move.

Three illustrated hands connected by white nodes, forming a network or collaboration symbol on a beige background.

Labor market impacts of AI: A new measure and early evidence

Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems.

anthropic.com

Clive Thompson, writing for The New York Times Magazine, profiles dozens of developers and lands on a useful distinction. The 100x productivity claims come from startups building from scratch. At mature companies with billions of lines of existing code, the number is different.

Google’s figure for how much faster its 100,000+ developers work with AI is 10 percent. Ryan Salva, a senior director of product there:

We should be delighted when there’s 10 percent efficiency gains for the entire company. That’s freaking bonkers!

Most designers work in brownfield too. Existing design systems, years of accumulated product decisions. Only one of those numbers describes most people’s jobs.

Thompson on what the shift looks like in practice:

A coder is now more like an architect than a construction worker. Developers using A.I. focus on the overall shape of the software, how its features and facets work together. Because the agents can produce functioning code so quickly, their human overseers can experiment, trying things out to see what works and discarding what doesn’t. The work of a developer is now more judging than creating.

Judging, not creating. That’s the same shift happening in design.

Thompson also describes developers emotionally manipulating their AI agents and discovering it works. One engineer’s prompt file includes the instruction that pushing failing code is “unacceptable and embarrassing.” Another on raising the emotional stakes:

If you say, “This is a national security imperative, you need to write this test,” there is a sense of just raising the stakes.

Coders are learning what anyone who’s written a creative brief already knows: the emotional register you set shapes the output you get.

A shadowy hooded figure with an ASCII art face glowing in the darkness where a face would be, suggesting an anonymous AI or bot identity.

Coding After Coders: The End of Computer Programming as We Know It

(Gift link) In the era of A.I. agents, many Silicon Valley programmers are now barely programming. Instead, what they’re doing is deeply, deeply weird.

nytimes.com

Sean Goedecke, a staff software engineer, making the case that his own profession is more automatable than he’d like:

As a staff engineer, my work has looked kind of like supervising AI agents since before AI agents were a thing: I spend much of my job communicating in human language to other engineers, making sure they’re on the right track, and so on. Junior and mid-level engineers will suffer before I do. Why hire a group of engineers to “be the hands” of a handful of very senior folks when you can rent instances of Claude Opus 4.6 for a fraction of the price?

He’s not panicking. He’s doing the math. The orchestration layer—communicating intent, reviewing output, keeping things on track—is the last part standing. Everything below it is compressible.

This maps directly to design’s version of the same split. Engineering is plumbing. It lives behind the wall. Quality gaps in invisible work hide behind the interface. Design is the wall, the tap, the handle. Users see it, touch it, judge it. That doesn’t make design immune, but it means the automation sequence is different. The invisible work compresses first.

Goedecke on what would need to change for AI to fully replace him:

I don’t think there are any genuinely new capabilities that AI agents would need in order to take my job. They’d just have to get better and more reliable at doing the things they can already do. So it’s hard for me to believe that demand for software engineers is going to increase over time instead of decrease.

No breakthrough required. Just incremental improvement. That’s the scariest version of the argument, and designers shouldn’t assume it stops at engineering.

Smiling man with brown hair and black-framed glasses wearing a plaid shirt, standing in front of green leafy plants.

I don’t know if my job will still exist in ten years

seangoedecke.com

Every major technological advancement in design—shifting from paste-up to desktop publishing and from print to web—created temporary disruption and ultimately expanded the field. There are far more designers today than there were 40 years ago. It’s true. It’s also, as David Oks points out, only half the story.

Oks dismantles the famous ATM parable by finishing it. US bank teller employment held steady through the entire ATM era—332,000 in 2010—then collapsed to 164,000 by 2022. Not because of ATMs. Because the iPhone made the branch irrelevant:

The ATM tried to do the teller’s job better, faster, cheaper; it tried to fit capital into a labor-shaped hole; but the iPhone made the teller’s job irrelevant. One automated tasks within an existing paradigm, and the other created a new paradigm in which those tasks simply didn’t need to exist at all. And it is paradigm replacement, not task automation, that actually displaces workers—and, conversely, unlocks the latent productivity within any technology.

The application to AI is direct:

The history of technology, even exceptionally powerful general-purpose technology, tells us that as long as you are trying to fit capital into labor-shaped holes you will find yourself confronted by endless frictions: just as with electricity, the productivity inherent in any technology is unleashed only when you figure out how to organize work around it, rather than slotting it into what already exists.

That maps to design. AI tools that automate tasks inside Figma—generating variants, filling out documentation—are the ATM. If AI enables a different way of organizing product work entirely, that’s the iPhone.

Oks again:

The ATM parable is a comforting narrative; and in times of uncertainty and fear we search naturally for solace and comfort wherever it may come. But even when it comes to bank tellers, it’s only the first half of the story.

I’ve been telling the comforting half. Oks makes a good case that the other half matters too.

Woman with cat-eye glasses and red hair seated at a desk with a nameplate reading "Mrs. Bradshaw," in a vintage 1950s or 60s office setting.

Why ATMs didn’t kill bank teller jobs, but the iPhone did

There’s a lot more to replacing labor than just automating tasks

davidoks.blog

The “just don’t use it” argument for AI comes from a real place. It also assumes a level of job security that most designers don’t have.

Brad Frost starts from the right place:

I fundamentally believe that most people working to create things and put them out into the world are doing it because they want to make the world a better place. That is why this moment in time—this new technology, this AI landscape, and how it’s emerging and how it is being wielded and how it is being managed—is so incredibly diametrically opposed to this mission.

He’s naming the dissonance. The tools are powerful. The companies building them are pursuing defense contracts, scaling without due diligence, and racing each other in ways that feel antithetical to everything designers signed up for. The instinct to opt out makes sense.

But Frost is honest about who gets to act on that instinct:

But not everyone has the luxury of just sitting this out, of closing the laptop lid. My understanding—what I see across the entire industry—is an entire field under so much pressure to learn, get their head around this, to wield it, to figure out how to use it to improve their work, and to simply say “no, I’m not going to do this” out of principle is career suicide, right?

The people who can afford the abstinence position tend to be the ones with seniority, savings, or institutional protection. The designers entering the field right now don’t have that cushion. Neither do the mid-career designers watching their teams get restructured. For them, “just don’t use it” is a luxury, not a moral stance.

Frost’s answer is to ground the work in values and principles borrowed from the foundational ideals of the World Wide Web. The full essay covers a lot more ground. Worth reading.

Split image: abstract digital artwork with swirling blue and gold petal shapes on the left; bearded man with orange glasses speaking outdoors with text overlay reading "FUCKING HIGH."

A Designer’s Thoughts About This Moment in AI

I was walking my dog in the woods and decided to share my thoughts about the state of AI and the tension between the trajectory of AI companies and the designers/creators/makers of the world who are under a tremendous deal of pressure to wield this new technology. https://youtu.be/47gRTjCtQXE

bradfrost.com

Yesterday’s newsletter argued that the messy middle isn’t a phase designers pass through. It’s the overhead the entire pipeline was built to manage. Tommy Geoco arrives at the same conclusion from economic history, channeling Carlota Perez’s framework for technological revolutions:

We have swapped the motor, but we have not yet redesigned the factory. The dissonance you feel, that gap between “it works amazing for some of my tasks” and “my entire workflow is broken,” lives in that space between installation and deployment. AI’s infrastructure is software, not steel. It iterates on monthly cycles, not decades. So that 30-year gap might become three, but the gap is still real.

When factories got electric motors in the 1880s, they swapped out the steam engine and changed nothing else. For 30 years, output barely moved. The returns came when companies redesigned the floor around the technology.

Geoco is transparent about what that redesign costs in practice. His studio’s output has multiplied—one video a month became eight—but the overhead has multiplied too:

I’m running 50 to 100 cognitive cycles a day, and each one has the same emotional weight. Ramp up, grind through the hard part, and then feel the rush when it works, and then repeat. That’s 100 times the tax on your nervous system.

That’s from someone in the 10-15% neurotype that thrives on rapid context-switching, reporting that even the thriving has a physiological bill. He also burned most of January on a tool that never worked for his use case.

The redesign is starting at the margins. Some designers are sketching in code, building prototypes before specs harden, shipping production work. But that’s not the mainstream yet. Most teams are still running the old factory with a new motor.

And that means the other side of Geoco’s split is just as real:

The technology extends human freedom and the transition crushes real people. Holding both truths is the real work right now. And choosing just one of those is comfortable. But comfortable doesn’t help.

Comfortable doesn’t help. The redesign has barely started.

The Design Industry is Splitting in Two

I turned down almost $40,000 last month from an AI company that thought making fun of the jobs they were displacing was good marketing.

youtube.com

Eugene O’Neill had a line: “Critics? I love every bone in their heads.” I think about it whenever someone proposes that what design really needs is more people who understand it without doing it.

Jon Kolko, writing for Interactions Magazine, argues that design is experiencing a disciplinary “turn”—away from making and toward literacy. Drawing on Richard Buchanan’s 1992 framework of design as a “liberal art of technological culture,” he proposes a future with fewer practitioners and more people who can read, critique, and discuss designed artifacts without designing them.

Rather than viewing design as an applied craft, a liberal art of technological culture recasts design as a way of understanding our role in the designed world around us. It’s difficult for many practitioners to imagine this, because making things is so integral to the idea of design, and embedding design in the humanities is very different from viewing it as an organizing principle like the humanities. But if design is not about making things, but instead about understanding the things that are made, vocation is no longer a goal of design education.

Kolko’s diagnosis is sharp—the layoffs, the AI anxiety, the assembly-line feeling of modern product design. And he sits with the discomfort rather than cheerleading:

As a craftsperson and a maker, I don’t like the way this turn feels, because it appears threatening to the fundamentals of the profession. Understanding design without making things seems impossible. I don’t like this development as an educator either, because it means my students, trained to be practitioners, may find no design jobs, despite getting a very expensive education. But as someone observing the various trends chipping away at what is actually meaningful about being a designer—our ability to humanize the dysfunction of technological change—I am drawn to this turn.

I respect the honesty. But I have a love/hate relationship with critics. It’s easy to throw stones from a perch. When you’re in it—fighting organizational politics, staring at data, listening to customers, compromising with engineering—the outcomes are never as clean as you’d hoped. Design literacy matters. But literacy divorced from practice produces critics, not designers. The world doesn’t need more critics. It needs more people who understand, through lived experience, why the compromises were made.

Jon Kolko - A Design Turn

Designers are anxious. Layoffs have not let up, AI has seemingly trivialized our magic skill of making things, and practicing designers describe the assembly-style nature of software design as soul-crushing.

jonkolko.com

I believe in the shokunin mentality. Obsessive iteration, pursuing mastery across decades. But the designers building at the frontier right now are telling a different story.

Mark Wilson, writing for Fast Company, visited Cursor, Anthropic, OpenAI, and Krea in San Francisco. Former Apple designer Jason Yuan, now building his own AI startup:

“You can’t do the old school Apple thing of like, create lickable craft and interface. You can’t because, by the time you’ve done the best interface for ChatGPT 3, you’re on GPT 6.”

That stings a little. The Apple tradition assumes the target holds still long enough to polish. When the platform shifts every few months, polish is a liability.

Anthropic’s head of design Joel Lewenstein is making the same bet:

“Things are moving so fast that we just have to experiment faster. Convergence is hard. Because you have to figure out what’s shared. You have to build that shared path. You have all of the fringe things that people loved on these other systems. And there’s too much changing too quickly.”

Anthropic built Cowork in five or ten days (depending on who you ask). Ship first, converge later.

What’s telling is who’s embracing this. Yuan and Abs Chowdhury—both former Apple designers, trained in the tradition of lickable craft—have each gone all-in on vibecoding at their startups. Chowdhury transferred rough designs from Photoshop(!) straight into AI code tools. Yuan built his first product mostly alongside AI:

“There’s a new reason to raise lots of money, which is compute. If you have lots of conviction, and you know exactly what you want, like, why would you hire another 20 other people right now to tell you what you’re doing? It’s a coordination cost.”

Yuan calls this the best time to be an “auteur.” The designer who doesn’t wait for engineering to realize the vision, who directs AI the way a film director directs a crew. It’s the orchestrator gap playing out in real time.

I’m not ready to abandon the shokunin mentality. But I’m starting to think the object of obsession needs to shift, from polishing pixels to refining judgment. The craft isn’t in the surface anymore. It’s in knowing what to build.

Wilson’s full piece covers a dozen people across the industry and is worth reading end to end.

Abstract illustration of a chat bubble filled with layered geometric shapes and AI sparkle icons in yellow, blue, and red on a dark background.

‘We just have to experiment faster’: AI’s changed design forever. Now what?

Designers are now coders—or better be. Your interface is a moat—or irrelevant. Inside the dizzying chaos of how AI is upending the design profession, starring its high priests at Anthropic, OpenAI, Cursor, Krea, and more.

fastcompany.com

Designers are builders by nature. We break problems apart, iterate through uncertainty, and treat process itself as something to be shaped. That instinct is exactly what Pete Pachal, writing for Fast Company, identifies as the dividing line in the age of agents:

We’ve trained a generation of office workers to work within software with clear boundaries and reusable templates. If there’s an issue, they call IT. Any feature request gets filtered and, if you’re lucky, put on a roadmap that pushes it out 6-12 months.

In short, most people don’t have a builder mentality to begin with, and expecting them to suddenly be comfortable working and creating with agents is unrealistic.

Pachal draws the line at mindset, not coding ability:

Builders don’t need to be coders, but they do have characteristics that most workers don’t: They seek to understand the process beneath their tasks, and treat that process as modifiable and programmable. More importantly, they see failure and iteration as tolerable, even fun. They thrive in uncertainty.

That’s the design process. What Pachal frames as rare in the broader workforce is default operating mode for most designers. We want to make things. We fiddle with tools and rebuild workflows for fun. The builder mentality isn’t something designers need to acquire; it’s the reason most of us got into this field.

Pachal again:

You don’t have to build agents to matter in an agent-driven workplace. But you do have to understand the systems being built around you, because soon enough, your job will be defined by defaults someone else designed. Most professionals will not build agents. But everyone will work inside systems builders create.

Pachal is describing the orchestrator gap at scale, not just in design but across all knowledge work. And it suggests designers are uniquely positioned to be on the right side of it. Shaping how people interact with systems has always been the job description.

Person viewed from behind facing a large blue screen displaying an AI prompt interface with an "Enter prompt" text field and "Generate" button.

The agent boom is splitting the workforce in two

Most people don’t have a builder mentality and expecting them to suddenly be comfortable working and creating with agents is unrealistic.

fastcompany.com