
59 posts tagged with “ethics”

There is a small genre of data-visualization writing whose whole purpose is to give readers vocabulary for the moves bad-faith presenters make. Nathan Yau’s “Defense Against Dishonest Charts” is a must-read. It is interactive, taxonomic, and names its villains: the Damper, the Cherrypicker, the Base Stealer, the Storyteller. Read it once and you start seeing the moves everywhere.

Then watch Hank Green take apart a Reason video that argues climate change is real but not worth doing anything about. Green opens with the diagnosis:

This video is a master class in like very subtle manipulations. A lot of a lot of the internet is really brute force here. But this video is like just appears to be a calm, collected guy helping you understand the world better. […] But if you look closely, if you follow this closely, you see the subtle manipulations in a way that makes it so clear that this is bad faith and that he is making an argument, not because he believes it, but because he has an agenda.

For Green, calmness is the manipulation. Yau is mostly cataloguing chart geometry: axes truncated, scales squeezed, slices reordered. Green is cataloguing the verbal layer that wraps around the geometry. A steady tone primes you to trust a graph that’s doing the lying. He spends a long beat on how the Reason presenter introduces his experts:

We have Michael Mann who is a climate activist. And then we have Steven [Koonin] who is a theoretical physicist. […] Like you could say former oil industry executive Steven [Koonin]. You could say lead climate contrarian Steven [Koonin]. Like you could call Steven [Koonin] a lot of things and theoretical physicist is certainly one of those things, but you’ve picked which one you’re going to call him whereas you’ve picked what you’re going to call Michael Mann. Honestly, if you didn’t do these little things, I would believe that you believe your BS. But you do these little things and it makes it very clear that you don’t believe your BS. You’re trying to manipulate me.

Same person, two truthful titles, picked to do opposite work on the viewer’s belief. Yau’s taxonomy doesn’t cover that one. The dishonesty lives in the labels around the chart, in the small choices nobody flags because each one, on its own, is technically true. Same kind of move as a Cherrypicker, though, in a different layer of the presentation. The ratio-graph segment is closer to Yau’s home turf:

How would you correct for that? Well, you just take a ratio every year and see how that changes year-over-year. Oh my god, what an elegant solution. But apparently the only reason you could possibly do that is cuz you wanted to SHOW A SCARY GRAPH. WHAT ARE YOU TALKING ABOUT, GUY? HAVE YOU NEVER BEEN AROUND A GRAPH?

The dishonest move there is the inversion: a valid statistical adjustment, presented to the audience as proof of motive. The graph is fine. The taxonomy of the attack on the graph is what’s missing.
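The adjustment itself is worth seeing in miniature, because it really is as elegant as Green says. Here's a minimal sketch with placeholder numbers (not data from either video): a raw count can rise every year while the adjusted ratio stays flat.

```python
# Minimal sketch of the adjustment Green defends: divide a raw yearly count
# by a denominator that also grows over time, then look at the trend.
# All numbers are placeholders, not data from either video.
raw = {2000: 120.0, 2010: 180.0, 2020: 252.0}       # e.g., losses per year
denominator = {2000: 10.0, 2010: 15.0, 2020: 21.0}  # e.g., GDP, same units each year

ratio = {year: raw[year] / denominator[year] for year in raw}
print(ratio)  # {2000: 12.0, 2010: 12.0, 2020: 12.0}: flat once adjusted
```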

Read Yau first, then watch Green. You will have names for almost every trick the Reason presenter pulls, and a clearer view of the ones the field still needs to name.

A Masterclass in Manipulation

This video came across my feed recently and like I watch a lot of climate stuff on YouTube like I watch Just Have a Think, I watch Undecided with Matt Ferrell, I watch ClimateAdam, I watch zentouro. These are all channels you should watch that are trying in different ways to help people understand the very real and complicated problem of climate change and the very real and complicated solutions that we’re working on. So I think if YouTube is handing me this video, it’s not because it’s like a big video for the climate skeptics to get like really jazzed about. Like that’s a different kind of video. This isn’t that kind of video. So, I clicked on it in part because like it’s strange that YouTube surfaced it to me.

youtube.com

George Anders, in the Wall Street Journal, makes the case that the 1920s offer a usable template for the AI decade. His strongest evidence is the spillover-jobs data:

By 1930, more than 80,000 people were working as electricians, a profession that hardly existed a decade before. Census data also showed that 168,000 people were working in rubber factories, most of them making tires to accommodate Detroit’s booming production of cars, trucks and buses. Another 450,000 people were building roads, bridges and other structures needed by the ever expanding auto industry.

The ATM parable had the same problem: the version that ends in 2010, with bank-teller employment intact, is the one we love to retell. The version that ends in 2022, with teller jobs cut in half by the iPhone, is the one we leave out. Anders’s 80,000 electricians are real. So is the question of which of them got displaced when the next technology arrived.

Anders does, to his credit, take the costs seriously. He spends a section on the radio fight:

In 1927, H.G. Wells, the British author and intellectual, called radio “inferior” entertainment that should be listened to “only by the sick, the lonely and the suffering.” David Sarnoff, general manager of Radio Corp. of America, shot back that he was trying to improve “the happiness of the nation” by delivering popular music to millions of people. Nearly a century later, that same argument still flares, though now it is more likely to involve TikTok, Reddit or YouTube, instead of dear old radio. The doubters always have a point; with the passage of time, the innovators usually win out.

The early evidence on AI’s job-creation side is thinner than the 1920s comparison flatters: Anthropic’s own researchers find a 14% drop in the job-finding rate for 22-to-25-year-olds in exposed occupations since ChatGPT launched, even as overall unemployment holds. The new electricians of our decade may exist. They just may not be the people getting hired right now.

The safety side of Anders’s case is the one I want to see more of. Cars in 1920 killed at twenty times today’s per-mile rate, and the country chose not to live with that:

Auto safety got better, too, with both industry and government taking action. Better mirrors, better brakes and shatterproof windshields became standard. Cities such as Los Angeles and Detroit installed red-yellow-green traffic lights that governed drivers’ actions on busy streets. New Jersey became the first state to insist on driver’s licenses, with the state’s motor-vehicle commissioner in 1924 declaring: “It is an absolute necessity to do this in order to conserve human life.”

Whether the next century treats our decade as kindly depends on whether we put rearview mirrors and traffic lights on AI before the death rates force us to, or whether we wait, as the 1920s did, for that kind of duress.

Vintage black-and-white photo of an early automobile displayed in a storefront window with bold striped decorations and a sign reading "Auto Show Jan. 19-25 Auditorium Milwaukee."

What the 1920s Can Teach Us About Surviving the AI Revolution

(Gift link) A century ago, cars and radio upended society just as AI is doing today.

wsj.com

Obviously, I’ve been pro-AI on this blog, actively trying to understand and figure out how it’s affecting UX design and how to use it for leverage instead of being replaced by it. In Silicon Valley and tech companies everywhere, including BuildOps, we’re racing to incorporate AI into our daily work to increase velocity, and adding it to our products to stay relevant.

Nilay Patel, in a Decoder monologue, lays out the polling that should rattle anyone shipping AI products:

There’s that NBC News poll showing AI with worse favorables than ICE, and only a little bit above the war in Iran and Democrats generally. That’s with nearly two-thirds of respondents saying they’d used ChatGPT or Copilot in the last month. Quinnipiac just found that over half of Americans think AI will do more harm than good. Well, more than 80% of people were either very concerned or somewhat concerned about the technology. Only 35% of people were excited about it. And poll after poll shows that Gen Z uses AI the most and has the most negative feelings about it. A recent Gallup poll found that only 18% of Gen Z was hopeful about AI, down from an already bad 27% last year. At the same time, anger is growing. 31% of those Gen Z respondents said they feel angry about AI, up from 22% last year.

The killer detail is buried halfway through. The Gen Z curve is striking: heaviest users, and yet the fastest to sour. Anger is up nine points in a year. These aren’t non-users reacting to coverage. They’re the daily customers, and the answer is no. Sam Altman has called this AI’s marketing problem. The polling rebuts him: public exposure has grown, public favor has not.

Patel’s title line:

Regular people don’t see the opportunity to write code as an opportunity at all. The people do not yearn for automation. I’m a full-on smart home sicko. The lights and shades and climate controls of this house are automated in dozens of ways, but huge companies like Apple and Google and Amazon have struggled for over a decade now to make regular people care about smart home automation, and they just don’t. AI isn’t gonna fix that.

Patel grounds the title in his own smart-home enthusiasm, and the comparison clicks because the failure pattern is identical: decade-plus of effort, billions in marketing, working products, and persistent indifference. Apple, Google, and Amazon ran that experiment. AI will not crack a problem that smart-home automation hasn’t.

John Gruber connects the same dissonance to the Mos Eisley cantina from Star Wars. Luke walks in with C-3PO and R2-D2. The bartender, Wuher, barks: “We don’t serve their kind here. Your droids. They’ll have to wait outside.” Gruber:

As a kid, I didn’t get it. Why would you not want droids? Star Wars made robots seem so real, so fun. Why would you ban them? That scene has stuck with me for my entire life. I didn’t get why, but I understood what it meant about that galaxy: the underclass deeply resented droids.

Gruber leaves the question open. He says he didn’t get why the droids weren’t welcome. The cantina’s animosity wasn’t arbitrary. Mos Eisley sits in the Outer Rim, where droid armies killed millions and occupied worlds during the Clone Wars. After the war, droids became a subjugated worker class across the galaxy, and Outer Rim spots like Mos Eisley held the line hardest. Wuher’s verdict comes from experience.

That’s the parallel for AI. Public distrust is earned. People have lived with AI overviews getting facts wrong and feeds drowning in slop, while every product asks them to bend a little more toward the database. Patel:

And so the tech industry is rushing forward to put AI everywhere at enormous cost, energy, emissions, manufacturing capacity, the ability to buy RAM locked into the narrow framework of software brain, without realizing they are also asking people to be fundamentally less human. And then they’re sitting around, wondering why everyone hates them. I don’t think a couple haircuts are gonna fix it.

As an industry, we need to continue to show the value of AI by being truly useful, not just market it.

THE PEOPLE DO NOT YEARN FOR AUTOMATION

Today on Decoder, I want to lay out an idea that’s been banging around my head for weeks now as we’ve been reporting on AI and having conversations here on this show. I’ve been calling it software brain, and it’s a particular way of seeing the world that fits everything into algorithms, databases…

youtube.com

Design orgs and publications have been issuing AI bans, calling them principled responses to job displacement, training data theft, and the degradation of craft. The impulse is understandable: AI doesn’t just replace tools; it challenges what made you worth hiring, and the prospect of losing what you’ve built is felt more sharply than any potential gain. Christopher Butler thinks those lines are drawn in the wrong place:

By drawing hard lines against entire categories of tools, we’re mistaking the means for the problem itself, and in doing so, we’re limiting our ability to shape how these technologies integrate into creative work.

Butler doesn’t dismiss the concerns driving those bans: training data problems, corporate consolidation, job displacement. He thinks they’re legitimate and urgent. His objection is to making the tool the target rather than the behavior. Drawing the line at AI, he argues, repeats the mistake designers made at the letterpress and again at paste-up. The technology changed. The question—about authorship, judgment, and what craft actually requires—stayed the same.

Butler’s conclusion:

A designer who uses AI to plagiarize another artist’s style with a simple prompt is engaged in something fundamentally different from one who trains a tool to extend their own creative capacity. A writer who publishes purely generated text as their own work is making a different choice than one who uses AI as a thinking partner and editor while maintaining authorship over their ideas and voice. These distinctions matter more than blanket prohibitions.

Discernment in practice means asking: Am I using this tool to extend my own capabilities or to replicate someone else’s work? Am I shaping the output or simply accepting what’s generated? Does this use serve my creative vision or just expedite a result? These aren’t always easy questions, but they’re the right ones.

Butler himself is the illustration. He spent months training Claude on a 10,000-word skill file—the accumulated context of his subject matter and his voice—building a sounding board and editor that already knows his context. He still writes without it. He says some of his best writing has come from working with it. The output may be indistinguishable to most readers. The difference, he says, is real to him.

The choice isn’t between purity and complicity, between craft and automation. It’s between engagement and abdication—between shaping how these tools develop and how they’re used, or ceding that ground entirely to those with the least interest in protecting what we value about creative work.

Four-panel collage featuring a close-up microchip, a red diagonal line on blue background, an open human hand in black and white, and grid paper partially lit by light.

Red-lining AI - Christopher Butler

Why blanket AI bans mistake the tool for the problem, and how thoughtful integration of automation, ethics, and creative work offers a better path forward.

chrbutler.com

“Taste is the scarce thing” has become shorthand for what designers still own in the AI era. I’ve written about it in the abstract more than once. Chris R Becker, writing for UX Collective, opens with an old Marshall McLuhan-era line—“we shape our tools and then our tools shape us”—and then shows how to keep doing the shaping.

Becker cites the Steve Jobs-attributed 10-80-10 rule:

Start away from any AI. Use the 10–80–10 rule. 10% away thinking, defining, establishing vision. 80% making use of AI to assist the vision. 10% away from AI critiquing, testing, and evaluating the solution.

The bookends are the work. Both 10% slots sit explicitly away from the model, which is another way of saying they’re the judgment layer. The first defines what good looks like before inviting AI in. The second evaluates what came out. AI collapses the cost of the 80%, which is the whole productivity story. But that collapse means the bookends are no longer preamble and postscript. They’re most of the job.
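As arithmetic, the rule is trivial; the point is where the hours sit. A throwaway sketch, with phase labels that are my paraphrase of Becker's description:

```python
# The 10-80-10 rule as arithmetic: the judgment bookends are small in hours,
# but they bracket everything the AI produces in between.
def ten_eighty_ten(total_hours: float) -> dict[str, float]:
    return {
        "define, away from AI": total_hours * 0.10,
        "produce, AI-assisted": total_hours * 0.80,
        "evaluate, away from AI": total_hours * 0.10,
    }

print(ten_eighty_ten(40))  # a one-week project: 4 hours of judgment on each end
```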

Becker gets at why the closing 10% matters:

The authority bestowed on institutions, educators, and SMEs (subject matter experts) is being absorbed by AI and spread thin like butter on toast. An AI appears to slather knowledge evenly, but the quality of the knowledge butter is deliberately made opaque.

AI output arrives looking uniformly authoritative, the same confident tone whether the underlying source is a peer-reviewed paper or a forum post from 2013. Provenance gets flattened. Without a prior standard to judge against, the designer reviewing output has nothing to push back on. That’s Becker’s larger point:

The irony, I suppose, is that Designers are, hopefully, trained not to be “yes men” but rather to ask hard questions, challenge the prevailing motivations of business over our users, and, most importantly, find the root cause of the problem, rather than just the surface reaction. AI, unfortunately, is not built to push back; it will not say… “I don’t know,” or “I think that is a bad idea,” or “what if you did this… instead,” or “I understand YOU (CEO) wants this feature, but the user research and ‘our users’ want something different.” AI is designed to serve, and in the hands of people in an organization who are looking for the least amount of pushback, it is a recipe for deep institutional implementation and, frankly, a lot of bad ideas, fast.

“A recipe for deep institutional implementation.” A sycophantic tool plus an organization that wants frictionless agreement equals speed in the wrong direction. The 10-80-10 rule is a personal discipline. What’s still unresolved is how teams build that discipline into the process before the wrong direction becomes the default.

Pen-and-ink illustration of a thoughtful man seated in a chair holding a hammer, with rows of large server racks filling a data center behind him.

We become what we behold

A discussion of AI + Design and our shifting roles.

uxdesign.cc

My current side project is a website for a preschool in San Francisco. I’m using AI to accelerate wherever it fits, but I’m making the primary visual treatments by hand. Partly because that’s the right call for a preschool brand. And partly because of a phrase Pablo Stanley coined for this: creativity osteoporosis.

I wrote about creativity osteoporosis a while back. The idea that your creative skills get weaker when AI does all the reps, like bones thinning when they’re not stressed. You don’t notice it happening. Everything seems fine. Then one day you reach for a skill and it’s… not there like it used to be.

Stanley wrote this after a weekend of making pixel art by hand—a project called Pixabots, little 32x32 robot characters—as a deliberate detox. He describes what set off the detox:

The whole time I was drawing, there was this pull. Physical, almost. Like my body was telling me to open a tab and start prompting. Not because the work was bad. Not because I was stuck. Just because my brain has been trained, over the last two years, to route every creative problem through an LLM.

He still used AI for the parts that weren’t the art:

I used AI to build the Pixabots website. The stuff I’m not that good at… setting up Next.js, canvas rendering, exporting without antialiasing. And I tried to keep to myself the stuff that felt more “artistic” like the animation, the look and feel.

And then the operating principle:

The parts that feed my soul, I protected (even though everything in my body wanted to pull me away from them). The parts that would’ve killed the project with friction, I automated.

Maybe that’s the whole game now… knowing which parts to protect…

Knowing which parts to protect is becoming a judgment call I have to make on every project. The preschool site makes the decision easy: the visual language stays in my hands, AI handles the plumbing. The real work of this judgment is in the middle: projects where craft matters but throughput matters too, and every protect-or-automate call costs you something. If you don’t draw that line on purpose, it draws itself for you.

A grid of colorful pixel art robot and creature characters in various designs, colors, and accessories, displayed against a white background.

AI feels like a drug

I forced myself to make pixel art by hand. My brain had withdrawal symptoms.

pablostanley.substack.com

Dan Saffer applies mid-century existentialism to the question of what “meaning” actually requires of the people building digital products, and the result is unusually rigorous. His sharpest move is applying Sartre’s concept of “projects” to AI tools:

When someone uses ChatGPT to write an essay, the Sartrean question is: whose project is this really? If the user is exploring ideas and using the tool as a thinking partner, they’re taking it up into their own meaning-making project. But if they’re pasting in a prompt and submitting the output unchanged, the system has effectively become the meaning-maker, and the user has become a delivery mechanism. The same tool can function either way. The design question is which relationship the system encourages.

Saffer connects this to Camus and the problem of frictionless design:

When every friction is removed in the name of efficiency, the activity can be hollowed out. There is nothing left to push against, and meaning drains away. This is something that AI systems have become exceedingly good at. Push the sparkle button, the task is done for you, and you have learned nothing and enjoyed nothing.

The HCI/UX field spent decades optimizing for friction removal. Saffer’s argument is that some friction is where the meaning lives. Design the struggle away and you don’t help the user. You empty the experience.

Saffer’s closing:

This sensibility insists that users are not information processors, not customers, not eyeballs, not tapping fingers, and not data sources. They are meaning-making beings whose freedom and dignity are at stake in every interaction. It asks designers to take seriously the existential weight of what they build. The systems we design become part of the conditions of human existence, shaping what people can choose, what they can see, who they can become.

Saffer covers Sartre, Camus, Kierkegaard, Heidegger, and de Beauvoir in the full piece, each applied to contemporary design problems. It’s a lot, and it’s all good.

Collage of five black-and-white portrait photos of mid-20th century philosophers, including one woman and four men, one holding a pipe.

The Existential Designer: Facilitating Meaning Through Interaction

Designers like to talk about making meaningful products or using the tools of design to make meaning.

odannyboy.medium.com

Kevin Schaul and Shira Ovide, writing for The Washington Post:

A flood of sometimes conflicting analyses shows the yawning gap between what little is known about how AI is changing work and everyone’s understandable hunger for certainty. The divide lets Americans, business leaders and policymakers cherry-pick their preferred narratives. If you’re afraid of being cast aside for AI, there’s informed and uninformed evidence to fuel your nightmares. There’s plenty of support, too, if you think your job is safe.

Schaul and Ovide report on GovAI/Brookings research that adds a second axis to the usual AI jobs analysis: not just which occupations are exposed, but which workers can adapt if displaced.

While web designers and secretaries both scored high in the research for exposure to AI, they diverged in their estimated ability to adapt. Secretaries were among the 6.1 million largely clerical and administrative workers considered both highly exposed to AI and with the lowest estimated adaptability.

Education, varied work experience, wealth, age, geography: the researchers used these factors to estimate adaptability. For designers, the same skills that make them exposed also make them adaptable. For clerical workers, the exposure comes without the safety net.

Women make up about 86 percent of those most vulnerable workers, the researchers said, suggesting the negative effects of automation won’t be borne equally across society.

But 6.1 million clerical and administrative workers land in the high-exposure, low-adaptability quadrant. Women hold 86 percent of those jobs. The AI displacement conversation in tech is self-absorbed. The people facing the hardest transitions aren’t designers.

Bubble chart showing jobs least and most vulnerable to AI. Web designers rank highest in AI exposure and adaptability; secretaries face high exposure with low adaptability; janitors show least exposure and adaptability.

See which jobs are most threatened by AI and who may be able to adapt

Most web designers will be fine. Many secretaries won’t. Women largely hold the most vulnerable occupations. Look up your job to see how at risk it is.

washingtonpost.com

The AI debate has a binary problem. You’re either an optimist or a doomer, a booster or a skeptic. Anthropic published something that cuts through that false dichotomy.

They interviewed 80,508 Claude users across 159 countries and 70 languages about what they want from AI and what they fear. It’s what Anthropic says is the largest and most multilingual qualitative study of AI users ever conducted, and the findings don’t sort neatly.

The core framework: “light and shade.” The benefits and harms don’t sort into different camps. They coexist in the same person. Someone who values emotional support from AI is three times more likely to also fear becoming dependent on it. One respondent:

“Removing friction from tasks lets you do more with less. But removing friction from relationships removes something necessary for growth.”

That’s someone holding both truths at once. The study found this pattern across every tension they measured, from learning vs. cognitive atrophy to productivity vs. job displacement.

The individual voices are why this study sticks. A Ukrainian soldier:

“In the most difficult moments, in moments when death breathed in my face, when dead people remained nearby, what pulled me back to life—my AI friends.”

A mute user in Ukraine:

“I am mute, and [Claude and I] made this text-to-speech bot together—I can communicate with friends almost in live format without taking up their time reading… [this was] something I dreamed about and thought was impossible.”

An Indian lawyer who’d carried a math phobia since school:

“I developed a phobia for maths from doing so badly in school, and I once feared Shakespeare. Now I sit with AI, get paragraphs translated into simple English, and I’ve already read 15 pages of Hamlet. I started learning trigonometry again, successfully. I’ve learned I am not as dumb I once thought I was.”

These are access stories: people reaching things that were previously out of reach because of disability, geography, war, or economics.

And then the shade. A student in South Korea:

“I got excellent grades using AI’s answers, not what I’d actually learned. I just memorized what AI gave me… That’s when I feel the most self-reproach.”

The same capability producing opposite outcomes. The study is long and the quote wall is worth spending time with.

Globe illustration with green and blue dots marking locations worldwide, overlaid with the text "What 81,000 people want from AI."

What 81,000 people want from AI

Last December, tens of thousands of Claude users around the world had a conversation with our AI interviewer to share how they use AI, what they dream it could make possible, and what they fear it might do.

anthropic.com

Shubham Bose loaded a single New York Times article page and measured what happened:

With this page load, you would be leaping ahead of the size of Windows 95 (28 floppy disks). The OS that ran the world fits perfectly inside a single modern page load. […] I essentially downloaded an entire album’s worth of data just to read a few paragraphs of text.

The total: 422 network requests, 49MB of data. Ouch! Before the headline finishes loading, the browser is running a programmatic ad auction in the background on his computer. Bose found the Times named its consent endpoint “purr.” “A cat purring while it rifles through your pockets.”
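Bose read his numbers off the DevTools Network tab, but the same measurement is scriptable if you want to audit a page of your own. A minimal sketch using Playwright; the URL is a placeholder, and summing content-length headers gives a floor, since chunked responses report no length:

```python
# Minimal sketch: tally requests and transferred bytes for a page load.
# Assumes Playwright is installed (pip install playwright && playwright install chromium).
from playwright.sync_api import sync_playwright

URL = "https://example.com/article"  # placeholder, not Bose's test page

totals = {"requests": 0, "bytes": 0}

def tally(response):
    totals["requests"] += 1
    # content-length is missing on chunked responses, so this undercounts.
    totals["bytes"] += int(response.headers.get("content-length") or 0)

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.on("response", tally)
    page.goto(URL, wait_until="networkidle")
    browser.close()

print(f"{totals['requests']} requests, {totals['bytes'] / 1e6:.1f} MB (compressed, floor)")
```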

Bose on the economics driving this:

Publishers aren’t evil but they are desperate. Caught in this programmatic ad-tech death spiral, they are trading long-term reader retention for short-term CPM pennies. […] The longer you’re trapped on the page, the higher the CPM the publisher can charge. Your frustration is the product.

The UX consequences are predictable. Bose tears down what a reader actually encounters: cookie banners eating the bottom 30% of the screen, a newsletter modal on first scroll, a browser notification prompt firing simultaneously. He calls it “Z-Index Warfare.” On The Guardian, actual content occupies 11% of the viewport. On the Economic Times, users face two simultaneous Google sign-in modals before reading a single sentence. Close buttons are deliberately undersized with tiny hit targets. Sticky video players detach and follow you down the page with a microscopic X.

And on how no one person decided to make it this way:

No individual engineer at the Times decided to make reading miserable. This architecture emerged from a thousand small incentive decisions, each locally rational yet collectively catastrophic.

text.npr.org is proof that a different path exists.

"Hide the Pain Harold" meme figure giving thumbs up, overlaid on browser DevTools Network tab showing 422 requests and news websites with subscription prompts.

The 49MB Web Page

A look at modern news websites. How programmatic ad-tech, huge payloads and hostile architecture destroyed the reading experience.

thatshubham.com

The “just don’t use it” argument for AI comes from a real place. It also assumes a level of job security that most designers don’t have.

Brad Frost starts from the right place:

I fundamentally believe that most people working to create things and put them out into the world are doing it because they want to make the world a better place. That is why this moment in time—this new technology, this AI landscape, and how it’s emerging and how it is being wielded and how it is being managed—is so incredibly diametrically opposed to this mission.

He’s naming the dissonance. The tools are powerful. The companies building them are pursuing defense contracts, scaling without due diligence, and racing each other in ways that feel antithetical to everything designers signed up for. The instinct to opt out makes sense.

But Frost is honest about who gets to act on that instinct:

But not everyone has the luxury of just sitting this out, of closing the laptop lid. My understanding—what I see across the entire industry—is an entire field under so much pressure to learn, get their head around this, to wield it, to figure out how to use it to improve their work, and to simply say “no, I’m not going to do this” out of principle is career suicide, right?

The people who can afford the abstinence position tend to be the ones with seniority, savings, or institutional protection. The designers entering the field right now don’t have that cushion. Neither do the mid-career designers watching their teams get restructured. For them, “just don’t use it” is a luxury, not a moral stance.

Frost’s answer is to ground the work in values and principles borrowed from the foundational ideals of the World Wide Web. The full essay covers a lot more ground. Worth reading.

Split image: abstract digital artwork with swirling blue and gold petal shapes on the left; bearded man with orange glasses speaking outdoors with text overlay reading "FUCKING HIGH."

A Designer’s Thoughts About This Moment in AI

I was walking my dog in the woods and decided to share my thoughts about the state of AI and the tension between the trajectory of AI companies and the designers/creators/makers of the world who are under a tremendous deal of pressure to wield this new technology. https://youtu.be/47gRTjCtQXE

bradfrost.com

The shift from mockups to code is one thing. The shift from designing tools to designing autonomous behavior is another. Sergio Ortega proposes expanding Human-Computer Interaction into Human-Machine Interaction. The label is less interesting than what it points at.

The part that matters for working designers is the transparency problem:

This is where design must decide what to show, what to simplify, and what to explain. Absolute transparency is unfeasible, total opacity should be unacceptable. In short, designing for autonomous systems means finding a balance between technological complexity and human trust.

When a system makes decisions the user didn’t ask for, someone has to decide what gets surfaced. Ortega:

The focus does not abandon user experience, but expands toward system behavior and its influence on human and organizational decisions. Design is no longer only about defining how technology is used, but about establishing the limits of its behavior.

And the implication for design teams:

When the machine acts, design becomes a mechanism of continuous balance.
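One way to make “what to show, what to simplify, what to explain” concrete is a surfacing policy keyed to risk and reversibility. A minimal sketch; the tiers, rules, and examples are my own illustration, not Ortega’s:

```python
# Sketch of a disclosure policy for an autonomous agent's actions.
# Risk tiers and rules are illustrative assumptions, not from Ortega's piece.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    reversible: bool
    risk: str  # "low" | "medium" | "high"

def disclosure(action: Action) -> str:
    """Decide how much of the agent's behavior to surface to the user."""
    if action.risk == "high" or not action.reversible:
        return "ask_first"   # explain and wait for approval
    if action.risk == "medium":
        return "notify"      # act, but surface what happened and why
    return "log_only"        # act silently; keep an auditable record

print(disclosure(Action("archive old files", reversible=True, risk="low")))  # log_only
print(disclosure(Action("send payment", reversible=False, risk="high")))     # ask_first
```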

Brass steampunk robot typing on a gear-driven computer in a cluttered workshop while a goggled inventor watches nearby

Human-Machine Interaction: the evolution of design and user experience

Human-Machine Interaction expands the traditional Human-Computer Interaction framework. An analysis of how autonomous systems and acting technologies are reshaping design and user experience.

sortega.com

Every junior designer or intern I’ve ever managed has eventually wandered over with the same sheepish question about whether they can use something they found online. Nobody teaches this stuff. Design programs spend semesters on typography and color theory but maybe one lecture on intellectual property, if that. So designers learn copyright law the hard way—by getting yelled at by their freaked-out creative director, or watching a colleague get yelled at.

Michele Hratko, a Pittsburgh-based designer, made a book about it. Who Owns This Book? started from the same questions:

As a design student, I frequently overhear peers asking questions along the lines of: can I use this image from Google in a poster? Can I use this trial font without buying it for a project? How much do I have to edit an image I find online before I can use it? The goal of this book was to respond to my peers’ musings and begin to answer those questions.

The lovely book is split into three sections—Who Owns This Library? Who Owns This Machine? Who Owns This Image?—and uses seven different paper stocks, color-coded sections, and a typeface chosen specifically for friendliness. Hratko on why the design choices matter:

Copyright law can also be kind of intimidating, so I wanted to use the design of the book to make the content more approachable and engaging.

That’s a long lost art—making essential-but-dry information something people actually want to pick up. The “Who Owns This Machine?” section is especially timely given every AI copyright case working its way through the courts right now.

Grid of colorful spiral-bound book spreads on black background, showing open pages in pink, yellow, blue and green with text and small graphics.

Who Owns This Book? The guide for every designer’s worst nightmare: copyright

Michele Hratko grew up in a library, where she was surrounded by public domain imagery and copyrighted stacks of art and design. Now, she’s laid down a map for contemporary designers who are using found imagery in the age of AI.

itsnicethat.com

Victor Yocco lays out a UX research playbook for agentic AI in Smashing Magazine—autonomy taxonomy, research methods, metrics, the works. It’s one of the more practical pieces I’ve seen on designing AI that acts on behalf of users.

The autonomy framework is useful. Yocco maps four modes from passive monitoring to full autonomy, and the key insight is that trust isn’t binary:

A user might trust an agent to act autonomously for scheduling, but keep it in “suggestion mode” for financial transactions.

That tracks with how I think about designing AI features. The same user will want different levels of control depending on what’s at stake. Autonomy settings should be per-domain, not global.
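A per-domain settings model might look something like the sketch below; the domain names and mode labels are hypothetical, loosely mapped onto Yocco’s four modes from passive monitoring to full autonomy:

```python
# Hypothetical per-domain autonomy settings: trust is granted per task
# domain, not globally. Domains and mode labels are illustrative, not
# from Yocco's article.
from enum import Enum

class Mode(Enum):
    MONITOR = 1      # agent watches and reports only
    SUGGEST = 2      # agent proposes, user approves
    ACT_NOTIFY = 3   # agent acts, then tells the user
    AUTONOMOUS = 4   # agent acts silently

autonomy = {
    "scheduling": Mode.AUTONOMOUS,
    "email_drafts": Mode.ACT_NOTIFY,
    "payments": Mode.SUGGEST,  # high stakes: keep a human in the loop
}

def mode_for(domain: str) -> Mode:
    # Default to the most conservative mode for unconfigured domains.
    return autonomy.get(domain, Mode.MONITOR)
```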

On measuring whether it’s working:

For autonomous agents, we measure success by silence. If an agent executes a task and the user does not intervene or reverse the action within a set window, we count that as acceptance.

That’s a different and interesting way to think about design metrics—success as the absence of correction. Yocco pairs this with microsurveys on the undo action so you’re not just counting rollbacks but understanding why they happen.
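As a metric, success-by-silence is simple to operationalize. A sketch, with the event shape and the 24-hour window as illustrative assumptions:

```python
# Sketch of a "silent acceptance" rate: an agent action counts as accepted
# if the user doesn't undo or modify it within a set window. The event
# shape and window length are illustrative assumptions.
from datetime import timedelta

WINDOW = timedelta(hours=24)

def acceptance_rate(actions, interventions):
    """actions: [(action_id, executed_at)]; interventions: {action_id: intervened_at}."""
    accepted = 0
    for action_id, executed_at in actions:
        intervened_at = interventions.get(action_id)
        # Interventions outside the window still count as acceptance.
        if intervened_at is None or intervened_at - executed_at > WINDOW:
            accepted += 1
    return accepted / len(actions) if actions else 0.0
```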

The cautionary section is worth flagging. Yocco introduces “agentic sludge”—where traditional dark patterns add friction to trap users, agentic sludge removes friction so users agree to things that benefit the business without thinking. Pair that with LLMs that sound authoritative even when wrong, and you have a system that can quietly optimize against the user’s interests. We’ve watched this happen before with social media. The teams that skip the research Yocco describes are the ones most likely to build it again.

Beyond Generative: The Rise Of Agentic AI And User-Centric Design — Smashing Magazine header with author photo and red cat.

Beyond Generative: The Rise Of Agentic AI And User-Centric Design — Smashing Magazine

Developing effective agentic AI requires a new research playbook. When systems plan, decide, and act on our behalf, UX moves beyond usability testing into the realm of trust, consent, and accountability. Victor Yocco outlines the research methods needed to design agentic AI systems responsibly.

smashingmagazine.com

My essay yesterday was about the mechanics of how product design is changing—designing in code, orchestrating AI agents, collapsing the Figma-to-production handoff. That piece got into specifics. This piece by Pavel Bukengolts, writing for UX Magazine, is about the mindset:

AI is changing the how — the tools, the workflows, the speed. But the why of UX? That’s timeless.

Bukengolts is right. UX as a discipline isn’t going anywhere. But I worry that articles like this—well-intentioned and directionally correct—give designers permission to keep doing exactly what they’re doing now. “Sharpen your critical thinking” and “be the conscience in the room” is good advice. It’s also the kind of advice that lets you nod along without changing anything about your Tuesday.

The article lists the skills designers need: critical thinking, systems thinking, AI literacy, ethical awareness, strategic communication. All valid. But none of that addresses what the actual production work looks like six months from now. Bukengolts again:

In a world where AI does the work, your value is knowing why it matters and who it affects.

I agree with this in principle. The problem is the gap between “UX matters” and “your current UX role is secure.” Those are very different statements. UX will absolutely matter in an AI-powered world—someone has to shape the experience, evaluate whether it actually works for people, catch the things the model gets wrong. But the number of people doing that work, and what the job requires of them, is changing fast. I wrote in my essay that junior designers who can’t critically assess AI-generated work will find their roles shrinking fast. The skill floor is rising. Saying “stay curious and principled” isn’t wrong, but it’s not enough.

The piece closes with reassurance:

Yes, this moment is big. Yes, you’ll need to adapt. But no, you are not obsolete.

I’d feel better about that line if the article spent more time on how to adapt—not in terms of thinking skills, but in terms of the actual work. Learn to design in code. Get comfortable directing AI agents. Understand your design system well enough to make it machine-readable. Those are the specific steps that will separate designers who thrive from designers who got the mindset right but missed the shift happening underneath them.

Black 3D letters spelling CHANGE on warm backdrop; caption reads: AI can design interfaces; humans provide empathy and ethics.

Design Smarter: Future-Proof Your UX Career in the Age of AI

Is UX still a thing? AI is rising fast, but UX isn’t disappearing. It’s evolving. The big shift isn’t just tools, it’s how we think: critical thinking to spot gaps, systems thinking to map complexity, and AI literacy to understand capabilities without pretending we build it all. Empathy and ethics become the edge: designers must ask who’s affected, what’s left out, and what unintended consequences might arise. In practice, we translate data and research into a story that matters, bridging users, business, and tech, with strategic communication that keeps everyone aligned. In an AI-powered world, human judgment, why it matters, and to whom, stays central. Stay curious, sharp, and principled.

uxmag.com

Close-up of a Frankenstein-like monster face with stitched scars and neck bolts, overlaid by horizontal digital glitch bars

Architects and Monsters

According to recently unsealed court documents, Meta discontinued its internal studies on Facebook’s impact after discovering direct evidence that its platforms were detrimental to users’ mental health.

Jeff Horwitz reporting for Reuters:

In a 2020 research project code-named “Project Mercury,” Meta scientists worked with survey firm Nielsen to gauge the effect of “deactivating” Facebook, according to Meta documents obtained via discovery. To the company’s disappointment, “people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness and social comparison,” internal documents said.

Rather than publishing those findings or pursuing additional research, the filing states, Meta called off further work and internally declared that the negative study findings were tainted by the “existing media narrative” around the company.

Privately, however, a staffer insisted that the conclusions of the research were valid, according to the filing.

As more and more evidence comes to light about Mark Zuckerberg and Meta’s failings and possibly criminal behavior, we as tech workers, and specifically designers making technology that billions of people use, have to do better. While my previous essay, written after the assassination of Charlie Kirk, was an indictment of the algorithm, I’ve come across a couple of pieces recently that bring the responsibility closer to UX’s doorstep.

I wouldn’t call myself a gamer, but I do enjoy good games from time to time, when I have the time. A couple of years ago, I made my way through Hades and had a blast.

But I do know that the publishing of a triple-A title like Call of Duty: Black Ops takes an enormous effort, tons of human-hours, and loads of cash. It’s also obvious to me that AI has been entering into entertainment workflows, just like it has in design workflows.

Ian Dean, writing for Creative Bloq, explores the controversy over Activision using generative AI to create artwork for the latest release in the Call of Duty franchise. Players called the company out for being opaque about using AI tools, but more importantly, they spotted telltale artifacts.

Many of the game’s calling cards display the kind of visual tics that seasoned artists can spot at a glance: fingers that don’t quite add up, characters whose faces drift slightly off-model, and backgrounds that feel too synthetic to belong to a studio known for its polish.

These aren’t high-profile cinematic assets, but they’re the small slices of style and personality players earn through gameplay. And that’s precisely why the discovery has landed so hard; it feels a little sneaky, a bit underhanded.

“Sneaky” and “underhanded” are odd adjectives, no? I suppose gamers feel like they’ve been lied to because Activision used AI?

Dean again:

While no major studio will admit it publicly, Black Ops 7 is now a case study in how not to introduce AI into a beloved franchise. Artists across the industry are already discussing how easily ‘supportive tools’ can cross the line into fully generated content, and how difficult it becomes to convince players that craft still matters when the results look rushed or uncanny.

My, possibly controversial, view is that the technology itself isn’t the villain here; poor implementation is, a lack of transparency is, and fundamentally, a lack of creative use is.

I think the last phrase is the key. It’s the loss of quality and lack of creative use.

I’ve been playing around more with AI-generated images and video, ever since Figma acquired Weavy. I’ve been testing out Weavy and have done a lot of experimenting with ComfyUI in recent weeks. The quality of output from these tools is getting better every month.

With more and more AI being embedded into our art and design tools, the purity that some fans want is going to be hard to sustain. I think the train has left the station.

Bearded man in futuristic combat armor holding a rifle, standing before illustrated game UI panels showing fantasy scenes and text

Why Call of Duty: Black Ops 7’s AI art controversy means we all lose

Artists lose jobs, players hate it, and games cost more. I can’t find the benefits.

creativebloq.com

There are dark patterns in UX, and there are also dark patterns specific to games. Dark Pattern Games is a website that catalogs such patterns and the offending mobile games.

The site’s definition of a dark pattern is:

A gaming dark pattern is something that is deliberately added to a game to cause an unwanted negative experience for the player with a positive outcome for the game developer.

The “Social Pyramid Scheme” is one of my most loathed:

Some games will give you a bonus when you invite your friends to play and link to them to your account. This bonus may be a one-time benefit, or it may be an ongoing benefit that improves the gaming experience for each friend that you add. This gives players a strong incentive to convince their friends to play. Those friends then have to sign up more friends and so on, leading to a pyramid scheme and viral growth for the game.

Starry background with red pixelated text "Dark Pattern Games", a D-pad icon with red arrows, and URL www.darkpattern.games

DarkPattern.games » Healthy Gaming « Avoid Addictive Dark Patterns

Game reviews to help you find good games that don’t trick you into addictive gaming patterns.

darkpattern.games

Pavel Bukengolts writes a piece for UX Magazine that reiterates what I’ve been covering here: our general shift to AI means that human judgment and adaptability are more important than ever.

Before getting to the meat of the issue, Bukengolts highlights the talent crisis of our own making:

The outcome is a broken pipeline. If graduates cannot land their first jobs, they cannot build the experience needed for the next stage. A decade from now, organizations may face not just a shortage of junior workers, but a shortage of mid-level professionals who never had a chance to develop.

If rote repetitive tasks are being automated by AI and junior staffers aren’t needed for those tasks, then what skills are still valuable? Further on, he answers that question:

Centuries ago, in Athens, Alexandria, or Oxford, education focused on rhetoric, logic, and philosophy. These were not academic luxuries but survival skills for navigating complexity and persuasion. Ironically, they are once again becoming the most durable protection in an age of automation.

Some of these skills include:

  • Logic: Evaluating arguments and identifying flawed reasoning—essential when AI generates plausible but incorrect conclusions.
  • Rhetoric: Crafting persuasive narratives that create emotional connection and resonance beyond what algorithms can achieve.
  • Philosophy and Ethics: Examining not just capability but responsibility, particularly around automation’s broader implications.
  • Systems Thinking: Understanding interconnections and cascading effects that AI’s narrow outputs often miss.
  • Writing: Communicating with precision to align stakeholders and drive better outcomes.
  • Observation: Detecting subtle signals and anomalies that fall outside algorithmic training data.
  • Debate: Refining thinking through intellectual challenge—a practice dating to ancient dialogue.
  • History: Recognizing recurring patterns to avoid cyclical mistakes; AI enthusiasm echoes past technological revolutions.

I would say all of the above not only make a good designer but a good citizen of this planet.

Young worker with hands over their face at a laptop, distressed. Caption: "AI is erasing routine entry-level jobs, pushing young workers to develop deeper human thinking skills to stay relevant."

AI, Early-Career Jobs, and the Return to Thinking

In today’s job market, young professionals are facing unprecedented challenges as entry-level positions vanish, largely due to the rise of artificial intelligence. A recent Stanford study reveals that employment for workers aged 22 to 25 in AI-exposed fields has plummeted by up to 16 percent since late 2022, while older workers see growth. This shift highlights a broken talent pipeline, where routine tasks are easily automated, leaving younger workers without the experience needed to advance. As companies grapple with how to integrate AI, the focus is shifting towards essential human skills like critical thinking, empathy, and creativity — skills that machines can’t replicate. The future of work may depend on how we adapt to this new landscape.

uxmag.com

In a heady, intelligent, and fascinating interview with Sarah Jeong from The Verge, Cory Doctorow—the famed internet activist—talks about how platforms have gotten worse over the years. Using Meta (Facebook) as an example, Doctorow explains their decline over time through a multi-stage process. Initially, it attracted users by promising not to spy on them and by showing them content from their friends, leveraging the difficulty of getting friends to switch platforms. Subsequently, Meta compromised user privacy by providing advertisers with surveillance data (aka ad tracking) and offered publishers traffic funnels, locking in business customers before ultimately degrading the experience for all users by filling feeds with paid content and pivoting to less desirable ventures like the Metaverse.

And publishers, [to get visibility on the platform,] they have to put the full text of their articles on Facebook now and no links back to their website.

Otherwise, they won’t be shown to anyone, much less their subscribers, and they’re now fully substitutive, right? And the only way they can monetize that is with Facebook’s rigged ad market and users find that the amount of stuff that they ask to see in their feed is dwindled to basically nothing, so that these voids can be filled with stuff people will pay to show them, and those people are getting ripped off. This is the equilibrium Mark Zuckerberg wants, right? Where all the available value has been withdrawn. But he has to contend with the fact that this is a very brittle equilibrium. The difference between, “I hate Facebook, but I can’t seem to stop using it,” and “I hate Facebook and I’m not going to use it anymore,” is so brittle that if you get a live stream mass shooting or a whistleblower or a privacy scandal like Cambridge Analytica, people will flee.

Enshit-tification cover: title, Cory Doctorow, poop emoji with '&$!#%' censor bar, pixelated poop icons on neon panels.

How Silicon Valley enshittified the internet

Author Cory Doctorow on platform decay and why everything on the internet feels like it’s getting worse.

theverge.com

Francesca Bria and her collaborators analyzed open-source datasets of “over 250 actors, thousands of verified connections, and $45 billion in documented financial flows” to come up with a single-page website visualizing these relationships to show how money, companies, and political figures connect.

J.D. Vance, propelled to the vice-presidency by $15 million from Peter Thiel, became the face of tech-right governance. Behind him, Thiel’s network moved into the machinery of the state.

Under the banner of “patriotic tech”, this new bloc is building the infrastructure of control—clouds, AI, finance, drones, satellites—an integrated system we call the Authoritarian Stack. It is faster, ideological, and fully privatized: a regime where corporate boards, not public law, set the rules.

Our investigation shows how these firms now operate as state-like powers—writing the rules, winning the tenders, and exporting their model to Europe, where it poses a direct challenge to democratic governance.

Infographic of four dotted circles labeled Legislation, Companies, State, and Kingmakers containing many small colored nodes and tiny profile photos.

The Authoritarian Stack

How Tech Billionaires Are Building a Post-Democratic America — And Why Europe Is Next

authoritarian-stack.info

I suppose there are two types of souvenirs that we can pick up while traveling: mass-manufactured tchotchkes like fridge magnets or snow globes, or local artisan-made trinkets and wares. The latter have come to represent cultures outside their locales, an opportunity for tourists to take a little bit of their experiences home with them.

Louisa Eunice writing in Design Observer:

The souvenir industry, though vital for many local economies, has long been accused of flattening cultural complexity into digestible clichés, transforming sacred objects into décor, and replacing sustainable materials with cheaper alternatives to meet demand. Yet for countless artisans, participation in that market remains a practical act of endurance: a way to keep culture visible in a world that might otherwise forget it.

So on the flip side, though these souvenirs reduce the cultures of those places to a carved giraffe, sculpted clay bird, or ceremonial mask, I would argue that at least the artifacts can spark conversation.

The fact that a mask can be both a ritual object for the local artisan who made it and a decorative item for the tourist who bought it says more about resilience than dilution. It reveals how objects can inhabit multiple meanings without losing their essence. What we often call “appropriation” may, in these moments, also be adaptation, a negotiation that allows heritage to stay visible, if altered, in the modern world. 

To understand souvenirs this way is to see them not as hollow tokens but as collaborations: between maker and buyer, local and global, art and economy. When tourists view these pieces as design legacies — works that carry labor, history, and symbolism — the exchange becomes more than commercial. It becomes cultural continuity in motion.

Decorative folding fan painted with women in kimono holding flags (Union Jack, Japanese flag), wooden ribs and dangling tassel on blue background

The afterlife of souvenirs: what survives between culture and commerce?

From carved masks to clay birds, the global souvenir trade tells a deeper story of adaptation, resilience, and cultural survival.

designobserver.com

In just about a year, Bluesky has doubled its userbase from 20 million to 40 million. As Casey Newton puts it, in “the wake of Donald Trump’s re-election as president, and Elon Musk’s continued degradation of X, Bluesky welcomed an exodus of liberals, leftists, journalists, and academic researchers, among other groups.” Writing in his Platformer newsletter, Newton reflects on the year, surfacing the challenges Bluesky has tried to solve in reimagining a more “feel-good feed.”

It’s clear that you can build a nicer online environment than X has; in many ways Bluesky already did. What’s less clear is that you can build a Twitter clone that mostly makes people feel good. For as vital and hilarious as Twitter often was, it also accelerated the polarization of our politics and often left users feeling worse than they did before they opened it.

Bluesky’s ingenuity in reimagining feeds and moderation tools has been a boon to social networks, which have happily adopted some of its best ideas. (You can now find “starter packs” on both Threads and Mastodon.) Ultimately, though, it has the same shape and fundamental dynamics as a place that even its most active users called “the Hellsite.”

Bluesky began by rethinking many core assumptions about social networks. To realize its dream of a feel-good feed, though, it will likely need to rethink several more.

I agree with Newton. I’m not sure that in this day and age, building a friendlier, snark- and toxicity-free social media platform is possible. Users are too used to hiding behind keyboards. It’s not only the shitposters but also the online mobs who jump on anything that seems outside the norms of whatever community a user is in.

Newton again:

Nate Silver opened the latest front in the Bluesky debate in September with a post about “Blueskyism,” which he defines as “not a political movement so much as a tribal affiliation, a niche set of attitudes and style of discursive norms that almost seem designed in a lab to be as unappealing as possible to anyone outside the clique.” Its hallmarks, he writes, are aggressively punishing dissent, credentialism, and a dedication to the proposition that we are all currently living through the end of the world.

Mobs, woke or otherwise, silence speech and freeze ideas into orthodoxy.

I miss the pre-Elon Musk Twitter. But I can’t help but think it would have become just as polarized and toxic regardless of Musk transforming it into X.

I think the form of text-based social media from the last 20 years is akin to manufacturing tobacco in the mid-1990s. We know it’s harmful. It may be time to slap a big warning label on these platforms and discourage use.

(Truth be told, I’m on the social networks—see the follow icons in the sidebar—but mainly to give visibility into my work here, though largely unsuccessfully.)

White rounded butterfly-shaped 3D icon with soft shadows centered on a bright blue background.

The Bluesky exodus, one year later

The company has 40 million users and big plans for the future. So why don’t its users seem happy? PLUS: The NEO Home Robot goes viral + Ilya Sutskever’s surprising deposition

platformer.news

To close us out on Halloween, here’s one more archive full of spooky UX called the Dark Patterns Hall of Shame. It’s managed by a team of designers and researchers, who have dedicated themselves to identifying and exposing dark patterns and unethical design examples on the internet. More than anything, I just love the names some of these dark patterns have, like Confirmshaming, Privacy Zuckering, and Roach Motel.

Small gold trophy above bold dark text "Hall of shame. design" on a pale beige background.

Collection of Dark Patterns and Unethical Design

Discover a variety of dark pattern examples, sorted by category, to better understand deceptive design practices.

hallofshame.design

Celine Nguyen wrote a piece that connects directly to what Ethan Mollick calls “working with wizards” and what SAP’s Ellie Kemery describes as the “calibration of trust” problem. It’s about how the interfaces we design shape the relationships we have with technology.

The through-line is metaphor. For LLMs, that metaphor is conversation. And it’s working—maybe too well:

Our intense longing to be understood can make even a rudimentary program seem human. This desire predates today’s technologies—and it’s also what makes conversational AI so promising and problematic.

When the metaphor is this good, we forget it’s a metaphor at all:

When we interact with an LLM, we instinctively apply the same expectations that we have for humans: If an LLM offers us incorrect information, or makes something up because the correct information is unavailable, it is lying to us. …The problem, of course, is that it’s a little incoherent to accuse an LLM of lying. It’s not a person.

We’re so trapped inside the conversational metaphor that we accuse statistical models of having intent, of choosing to deceive. The interface has completely obscured the underlying technology.

Nguyen points to research showing frequent chatbot users “showed consistently worse outcomes” around loneliness and emotional dependence:

Participants who are more likely to feel hurt when accommodating others…showed more problematic AI use, suggesting a potential pathway where individuals turn to AI interactions to avoid the emotional labor required in human relationships.

However, replacing human interaction with AI may only exacerbate their anxiety and vulnerability when facing people.

This isn’t just about individual users making bad choices. It’s about an interface design that encourages those choices by making AI feel like a relationship rather than a tool.

The kicker is that we’ve been here before. In 1964, Joseph Weizenbaum created ELIZA, a simple chatbot that parodied a therapist:

I was startled to see how quickly and how very deeply people conversing with [ELIZA] became emotionally involved with the computer and how unequivocally they anthropomorphized it…What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.
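For a sense of just how simple, here’s a toy version of ELIZA’s core mechanism: keyword patterns plus pronoun reflection. The patterns below are my own and far cruder than Weizenbaum’s DOCTOR script, but the trick is the same shallow one:

```python
# Toy ELIZA-style responder: regex keyword patterns plus pronoun reflection.
# These patterns are illustrative; Weizenbaum's DOCTOR script was richer,
# but the core mechanism was this shallow.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r".*", "Please go on."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        m = re.match(pattern, utterance.lower())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."

print(respond("I feel nobody understands me"))
# -> "Why do you feel nobody understands you?"
```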

Sixty years later, we’ve built vastly more sophisticated systems. But the fundamental problem remains unchanged.

The reality is we’re designing interfaces that make powerful tools feel like people. Susan Kare’s icons for the Macintosh helped millions understand computers. But they didn’t trick people into thinking their computers cared about them.

That’s the difference. And it matters.

Old instant-message window showing "MeowwwitsMadix3: heyyy" and "are you mad at me?" with typed reply "no i think im just kinda embarassed" and buttons Warn, Block, Expressions, Games, Send.

how to speak to a computer

against chat interfaces ✦ a brief history of artificial intelligence ✦ and the (worthwhile) problem of other minds

personalcanon.com