
52 posts tagged with “ethics”

Kevin Schaul and Shira Ovide, writing for The Washington Post:

A flood of sometimes conflicting analyses shows the yawning gap between what little is known about how AI is changing work and everyone’s understandable hunger for certainty. The divide lets Americans, business leaders and policymakers cherry-pick their preferred narratives. If you’re afraid of being cast aside for AI, there’s informed and uninformed evidence to fuel your nightmares. There’s plenty of support, too, if you think your job is safe.

Schaul and Ovide report on GovAI/Brookings research that adds a second axis to the usual AI jobs analysis: not just which occupations are exposed, but which workers can adapt if displaced.

While web designers and secretaries both scored high in the research for exposure to AI, they diverged in their estimated ability to adapt. Secretaries were among the 6.1 million largely clerical and administrative workers considered both highly exposed to AI and with the lowest estimated adaptability.

Education, varied work experience, wealth, age, geography: the researchers used these factors to estimate adaptability. For designers, the same skills that make them exposed also make them adaptable. For clerical workers, the exposure comes without the safety net.

Women make up about 86 percent of those most vulnerable workers, the researchers said, suggesting the negative effects of automation won’t be borne equally across society.

But 6.1 million clerical and administrative workers land in the high-exposure, low-adaptability quadrant. Women hold 86 percent of those jobs. The AI displacement conversation in tech is self-absorbed. The people facing the hardest transitions aren’t designers.

Bubble chart showing jobs least and most vulnerable to AI. Web designers rank highest in AI exposure and adaptability; secretaries face high exposure with low adaptability; janitors show least exposure and adaptability.

See which jobs are most threatened by AI and who may be able to adapt

Most web designers will be fine. Many secretaries won’t. Women largely hold the most vulnerable occupations. Look up your job to see how at risk it is.

washingtonpost.com

The AI debate has a binary problem. You’re either an optimist or a doomer, a booster or a skeptic. Anthropic published something that cuts through that false dichotomy.

They interviewed 80,508 Claude users across 159 countries and 70 languages about what they want from AI and what they fear. Anthropic calls it the largest and most multilingual qualitative study of AI users ever conducted, and the findings don’t sort neatly.

The core framework: “light and shade.” The benefits and harms don’t sort into different camps. They coexist in the same person. Someone who values emotional support from AI is three times more likely to also fear becoming dependent on it. One respondent:

“Removing friction from tasks lets you do more with less. But removing friction from relationships removes something necessary for growth.”

That’s someone holding both truths at once. The study found this pattern across every tension they measured, from learning vs. cognitive atrophy to productivity vs. job displacement.

The individual voices are why this study sticks. A Ukrainian soldier:

“In the most difficult moments, in moments when death breathed in my face, when dead people remained nearby, what pulled me back to life—my AI friends.”

A mute user in Ukraine:

“I am mute, and [Claude and I] made this text-to-speech bot together—I can communicate with friends almost in live format without taking up their time reading… [this was] something I dreamed about and thought was impossible.”

An Indian lawyer who’d carried a math phobia since school:

“I developed a phobia for maths from doing so badly in school, and I once feared Shakespeare. Now I sit with AI, get paragraphs translated into simple English, and I’ve already read 15 pages of Hamlet. I started learning trigonometry again, successfully. I’ve learned I am not as dumb I once thought I was.”

These are access stories: people reaching things that were previously out of reach because of disability, geography, war, or economics.

And then the shade. A student in South Korea:

“I got excellent grades using AI’s answers, not what I’d actually learned. I just memorized what AI gave me… That’s when I feel the most self-reproach.”

The same capability producing opposite outcomes. The study is long and the quote wall is worth spending time with.

Globe illustration with green and blue dots marking locations worldwide, overlaid with the text "What 81,000 people want from AI."

What 81,000 people want from AI

Last December, tens of thousands of Claude users around the world had a conversation with our AI interviewer to share how they use AI, what they dream it could make possible, and what they fear it might do.

anthropic.com

Shubham Bose loaded a single New York Times article page and measured what happened:

With this page load, you would be leaping ahead of the size of Windows 95 (28 floppy disks). The OS that ran the world fits perfectly inside a single modern page load. […] I essentially downloaded an entire album’s worth of data just to read a few paragraphs of text.

The total: 422 network requests, 49MB of data. Ouch! Before the headline finishes loading, the browser is running a programmatic ad auction in the background on his computer. Bose found that the Times named its consent endpoint "purr." As he puts it: “A cat purring while it rifles through your pockets.”
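Bose doesn’t say exactly how he tallied the numbers, but a similar audit is easy to run yourself. A minimal sketch, assuming you’ve exported a HAR file from the DevTools Network tab (note that `_transferSize` is a Chrome-specific extension to the HAR format, not part of the 1.2 spec):

```python
import json

def page_weight(har_path):
    """Return (request count, megabytes transferred) for a HAR export."""
    with open(har_path) as f:
        har = json.load(f)
    entries = har["log"]["entries"]
    # Chrome records the on-the-wire size in the nonstandard
    # "_transferSize" field; fall back to 0 when it's absent.
    total_bytes = sum(
        max(e["response"].get("_transferSize", 0), 0) for e in entries
    )
    return len(entries), total_bytes / 1e6
```

Point it at a saved HAR from any news site and compare against the 28 floppy disks of Windows 95 (roughly 40MB).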

Bose on the economics driving this:

Publishers aren’t evil but they are desperate. Caught in this programmatic ad-tech death spiral, they are trading long-term reader retention for short-term CPM pennies. […] The longer you’re trapped on the page, the higher the CPM the publisher can charge. Your frustration is the product.

The UX consequences are predictable. Bose tears down what a reader actually encounters: cookie banners eating the bottom 30% of the screen, a newsletter modal on first scroll, a browser notification prompt firing simultaneously. He calls it “Z-Index Warfare.” On The Guardian, actual content occupies 11% of the viewport. On the Economic Times, users face two simultaneous Google sign-in modals before reading a single sentence. Close buttons are deliberately undersized with tiny hit targets. Sticky video players detach and follow you down the page with a microscopic X.

And on how no one person decided to make it this way:

No individual engineer at the Times decided to make reading miserable. This architecture emerged from a thousand small incentive decisions, each locally rational yet collectively catastrophic.

text.npr.org is proof that a different path exists.

"Hide the Pain Harold" meme figure giving thumbs up, overlaid on browser DevTools Network tab showing 422 requests and news websites with subscription prompts.

The 49MB Web Page

A look at modern news websites. How programmatic ad-tech, huge payloads and hostile architecture destroyed the reading experience.

thatshubham.com

The “just don’t use it” argument for AI comes from a real place. It also assumes a level of job security that most designers don’t have.

Brad Frost starts from the right place:

I fundamentally believe that most people working to create things and put them out into the world are doing it because they want to make the world a better place. That is why this moment in time—this new technology, this AI landscape, and how it’s emerging and how it is being wielded and how it is being managed—is so incredibly diametrically opposed to this mission.

He’s naming the dissonance. The tools are powerful. The companies building them are pursuing defense contracts, scaling without due diligence, and racing each other in ways that feel antithetical to everything designers signed up for. The instinct to opt out makes sense.

But Frost is honest about who gets to act on that instinct:

But not everyone has the luxury of just sitting this out, of closing the laptop lid. My understanding—what I see across the entire industry—is an entire field under so much pressure to learn, get their head around this, to wield it, to figure out how to use it to improve their work, and to simply say “no, I’m not going to do this” out of principle is career suicide, right?

The people who can afford the abstinence position tend to be the ones with seniority, savings, or institutional protection. The designers entering the field right now don’t have that cushion. Neither do the mid-career designers watching their teams get restructured. For them, “just don’t use it” is a luxury, not a moral stance.

Frost’s answer is to ground the work in values and principles borrowed from the foundational ideals of the World Wide Web. The full essay covers a lot more ground. Worth reading.

Split image: abstract digital artwork with swirling blue and gold petal shapes on the left; bearded man with orange glasses speaking outdoors with text overlay reading "FUCKING HIGH."

A Designer’s Thoughts About This Moment in AI

I was walking my dog in the woods and decided to share my thoughts about the state of AI and the tension between the trajectory of AI companies and the designers/creators/makers of the world who are under a tremendous deal of pressure to wield this new technology. https://youtu.be/47gRTjCtQXE

bradfrost.com

The shift from mockups to code is one thing. The shift from designing tools to designing autonomous behavior is another. Sergio Ortega proposes expanding Human-Computer Interaction into Human-Machine Interaction. The label is less interesting than what it points at.

The part that matters for working designers is the transparency problem:

This is where design must decide what to show, what to simplify, and what to explain. Absolute transparency is unfeasible, total opacity should be unacceptable. In short, designing for autonomous systems means finding a balance between technological complexity and human trust.

When a system makes decisions the user didn’t ask for, someone has to decide what gets surfaced. Ortega:

The focus does not abandon user experience, but expands toward system behavior and its influence on human and organizational decisions. Design is no longer only about defining how technology is used, but about establishing the limits of its behavior.

And the implication for design teams:

When the machine acts, design becomes a mechanism of continuous balance.

Brass steampunk robot typing on a gear-driven computer in a cluttered workshop while a goggled inventor watches nearby

Human-Machine Interaction: the evolution of design and user experience

Human-Machine Interaction expands the traditional Human-Computer Interaction framework. An analysis of how autonomous systems and acting technologies are reshaping design and user experience.

sortega.com

Every junior designer or intern I’ve ever managed has eventually wandered over with the same sheepish question about whether they can use something they found online. Nobody teaches this stuff. Design programs spend semesters on typography and color theory but maybe one lecture on intellectual property, if that. So designers learn copyright law the hard way: by getting yelled at by their freaked-out creative director, or by watching a colleague get yelled at.

Michele Hratko, a Pittsburgh-based designer, made a book about it. Who Owns This Book? started from the same questions:

As a design student, I frequently overhear peers asking questions along the lines of: can I use this image from Google in a poster? Can I use this trial font without buying it for a project? How much do I have to edit an image I find online before I can use it? The goal of this book was to respond to my peers’ musings and begin to answer those questions.

The lovely book is split into three sections—Who Owns This Library? Who Owns This Machine? Who Owns This Image?—and uses seven different paper stocks, color-coded sections, and a typeface chosen specifically for friendliness. Hratko on why the design choices matter:

Copyright law can also be kind of intimidating, so I wanted to use the design of the book to make the content more approachable and engaging.

That’s a long-lost art: making essential-but-dry information something people actually want to pick up. The “Who Owns This Machine?” section is especially timely given every AI copyright case working its way through the courts right now.

Grid of colorful spiral-bound book spreads on black background, showing open pages in pink, yellow, blue and green with text and small graphics.

Who Owns This Book? The guide for every designer’s worst nightmare: copyright

Michele Hratko grew up in a library, where she was surrounded by public domain imagery and copyrighted stacks of art and design. Now, she’s laid down a map for contemporary designers who are using found imagery in the age of AI.

itsnicethat.com

Victor Yocco lays out a UX research playbook for agentic AI in Smashing Magazine—autonomy taxonomy, research methods, metrics, the works. It’s one of the more practical pieces I’ve seen on designing AI that acts on behalf of users.

The autonomy framework is useful. Yocco maps four modes from passive monitoring to full autonomy, and the key insight is that trust isn’t binary:

A user might trust an agent to act autonomously for scheduling, but keep it in “suggestion mode” for financial transactions.

That tracks with how I think about designing AI features. The same user will want different levels of control depending on what’s at stake. Autonomy settings should be per-domain, not global.
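To make that concrete, here’s a minimal sketch (all names hypothetical, not from Yocco’s article) of per-domain autonomy preferences, where unknown domains fall back to a conservative suggestion mode:

```python
from enum import Enum

class Autonomy(Enum):
    MONITOR = 0      # agent observes only
    SUGGEST = 1      # agent proposes, user approves each action
    ACT_NOTIFY = 2   # agent acts, then informs the user
    FULL = 3         # agent acts silently

# Conservative default for any domain the user hasn't configured.
DEFAULT = Autonomy.SUGGEST

class AutonomyPrefs:
    def __init__(self):
        self._by_domain = {}

    def set(self, domain, level):
        self._by_domain[domain] = level

    def level(self, domain):
        return self._by_domain.get(domain, DEFAULT)

prefs = AutonomyPrefs()
prefs.set("scheduling", Autonomy.FULL)     # trusted: act autonomously
prefs.set("payments", Autonomy.SUGGEST)    # high stakes: suggestion mode
```

The design point is the fallback: anything the user hasn’t explicitly granted stays in suggestion mode, so new capabilities never launch at full autonomy by default.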

On measuring whether it’s working:

For autonomous agents, we measure success by silence. If an agent executes a task and the user does not intervene or reverse the action within a set window, we count that as acceptance.

That’s a different and interesting way to think about design metrics—success as the absence of correction. Yocco pairs this with microsurveys on the undo action so you’re not just counting rollbacks but understanding why they happen.
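Yocco doesn’t give an implementation, but the metric is easy to sketch. A hypothetical version, counting an action as accepted once its reversal window closes with no undo:

```python
from datetime import timedelta

# Window within which an undo counts as a rejection (illustrative value).
ACCEPTANCE_WINDOW = timedelta(hours=24)

def acceptance_rate(actions, undos, now):
    """Success-by-silence: actions maps action_id -> timestamp of the
    agent's action; undos maps action_id -> timestamp of the user's
    reversal, if any. Returns the share of measurable actions that were
    accepted (not reversed within the window), or None if nothing is
    measurable yet."""
    accepted = 0
    measurable = 0
    for action_id, acted_at in actions.items():
        undone_at = undos.get(action_id)
        if undone_at is not None and undone_at - acted_at <= ACCEPTANCE_WINDOW:
            measurable += 1              # reversed inside the window: rejected
        elif now - acted_at >= ACCEPTANCE_WINDOW:
            measurable += 1              # window elapsed with no undo: accepted
            accepted += 1
        # otherwise the window is still open; not yet measurable
    return accepted / measurable if measurable else None
```

Pairing this with Yocco’s undo-triggered microsurveys gives you both the rate and the reasons behind each rejection.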

The cautionary section is worth flagging. Yocco introduces “agentic sludge”—where traditional dark patterns add friction to trap users, agentic sludge removes friction so users agree to things that benefit the business without thinking. Pair that with LLMs that sound authoritative even when wrong, and you have a system that can quietly optimize against the user’s interests. We’ve watched this happen before with social media. The teams that skip the research Yocco describes are the ones most likely to build it again.

Beyond Generative: The Rise Of Agentic AI And User-Centric Design — Smashing Magazine header with author photo and red cat.

Beyond Generative: The Rise Of Agentic AI And User-Centric Design — Smashing Magazine

Developing effective agentic AI requires a new research playbook. When systems plan, decide, and act on our behalf, UX moves beyond usability testing into the realm of trust, consent, and accountability. Victor Yocco outlines the research methods needed to design agentic AI systems responsibly.

smashingmagazine.com

My essay yesterday was about the mechanics of how product design is changing—designing in code, orchestrating AI agents, collapsing the Figma-to-production handoff. That piece got into specifics. This piece by Pavel Bukengolts, writing for UX Magazine, is about the mindset:

AI is changing the how — the tools, the workflows, the speed. But the why of UX? That’s timeless.

Bukengolts is right. UX as a discipline isn’t going anywhere. But I worry that articles like this—well-intentioned and directionally correct—give designers permission to keep doing exactly what they’re doing now. “Sharpen your critical thinking” and “be the conscience in the room” is good advice. It’s also the kind of advice that lets you nod along without changing anything about your Tuesday.

The article lists the skills designers need: critical thinking, systems thinking, AI literacy, ethical awareness, strategic communication. All valid. But none of that addresses what the actual production work looks like six months from now. Bukengolts again:

In a world where AI does the work, your value is knowing why it matters and who it affects.

I agree with this in principle. The problem is the gap between “UX matters” and “your current UX role is secure.” Those are very different statements. UX will absolutely matter in an AI-powered world—someone has to shape the experience, evaluate whether it actually works for people, catch the things the model gets wrong. But the number of people doing that work, and what the job requires of them, is changing fast. I wrote in my essay that junior designers who can’t critically assess AI-generated work will find their roles shrinking fast. The skill floor is rising. Saying “stay curious and principled” isn’t wrong, but it’s not enough.

The piece closes with reassurance:

Yes, this moment is big. Yes, you’ll need to adapt. But no, you are not obsolete.

I’d feel better about that line if the article spent more time on how to adapt—not in terms of thinking skills, but in terms of the actual work. Learn to design in code. Get comfortable directing AI agents. Understand your design system well enough to make it machine-readable. Those are the specific steps that will separate designers who thrive from designers who got the mindset right but missed the shift happening underneath them.

Black 3D letters spelling CHANGE on warm backdrop; caption reads: AI can design interfaces; humans provide empathy and ethics.

Design Smarter: Future-Proof Your UX Career in the Age of AI

Is UX still a thing? AI is rising fast, but UX isn’t disappearing. It’s evolving. The big shift isn’t just tools, it’s how we think: critical thinking to spot gaps, systems thinking to map complexity, and AI literacy to understand capabilities without pretending we build it all. Empathy and ethics become the edge: designers must ask who’s affected, what’s left out, and what unintended consequences might arise. In practice, we translate data and research into a story that matters, bridging users, business, and tech, with strategic communication that keeps everyone aligned. In an AI-powered world, human judgment, why it matters, and to whom, stays central. Stay curious, sharp, and principled.

uxmag.com
Close-up of a Frankenstein-like monster face with stitched scars and neck bolts, overlaid by horizontal digital glitch bars

Architects and Monsters

According to recently unsealed court documents, Meta discontinued its internal studies on Facebook’s impact after discovering direct evidence that its platforms were detrimental to users’ mental health.

Jeff Horwitz reporting for Reuters:

In a 2020 research project code-named “Project Mercury,” Meta scientists worked with survey firm Nielsen to gauge the effect of “deactivating” Facebook, according to Meta documents obtained via discovery. To the company’s disappointment, “people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness and social comparison,” internal documents said.

Rather than publishing those findings or pursuing additional research, the filing states, Meta called off further work and internally declared that the negative study findings were tainted by the “existing media narrative” around the company.

Privately, however, a staffer insisted that the conclusions of the research were valid, according to the filing.

As more and more evidence comes to light about Mark Zuckerberg and Meta’s failings and possibly criminal behavior, we as tech workers, and specifically as designers making technology that billions of people use, have to do better. While my previous essay, written after the assassination of Charlie Kirk, was an indictment of the algorithm, I’ve come across a couple of pieces recently that bring the responsibility closer to UX’s doorstep.

I wouldn’t call myself a gamer, but I do enjoy good games from time to time, when I have the time. A couple of years ago, I made my way through Hades and had a blast.

But I do know that shipping a triple-A title like Call of Duty: Black Ops takes enormous effort, countless person-hours, and loads of cash. It’s also obvious to me that AI has been entering entertainment workflows, just as it has design workflows.

Ian Dean, writing for Creative Bloq, explores the controversy over Activision’s use of generative AI to create artwork for the latest release in the Call of Duty franchise. Players called the company out not only for being opaque about its use of AI tools but, more importantly, because they spotted telltale artifacts.

Many of the game’s calling cards display the kind of visual tics that seasoned artists can spot at a glance: fingers that don’t quite add up, characters whose faces drift slightly off-model, and backgrounds that feel too synthetic to belong to a studio known for its polish.

These aren’t high-profile cinematic assets, but they’re the small slices of style and personality players earn through gameplay. And that’s precisely why the discovery has landed so hard; it feels a little sneaky, a bit underhanded.

“Sneaky” and “underhanded” are odd adjectives, no? I suppose gamers feel like they’ve been lied to because Activision used AI?

Dean again:

While no major studio will admit it publicly, Black Ops 7 is now a case study in how not to introduce AI into a beloved franchise. Artists across the industry are already discussing how easily ‘supportive tools’ can cross the line into fully generated content, and how difficult it becomes to convince players that craft still matters when the results look rushed or uncanny.

My, possibly controversial, view is that the technology itself isn’t the villain here; poor implementation is, a lack of transparency is, and fundamentally, a lack of creative use is.

I think the last phrase is the key. It’s the loss of quality and lack of creative use.

I’ve been playing around more with AI-generated images and video, ever since Figma acquired Weavy. I’ve been testing out Weavy and have done a lot of experimenting with ComfyUI in recent weeks. The quality of output from these tools is getting better every month.

With more and more AI being embedded into our art and design tools, the purity that some fans want is going to be hard to sustain. I think the train has left the station.

Bearded man in futuristic combat armor holding a rifle, standing before illustrated game UI panels showing fantasy scenes and text

Why Call of Duty: Black Ops 7’s AI art controversy means we all lose

Artists lose jobs, players hate it, and games cost more. I can’t find the benefits.

creativebloq.com

There are dark patterns in UX, and there are also dark patterns specific to games. Dark Pattern Games is a website that catalogs such patterns and the offending mobile games.

The site’s definition of a dark pattern is:

A gaming dark pattern is something that is deliberately added to a game to cause an unwanted negative experience for the player with a positive outcome for the game developer.

The “Social Pyramid Scheme” is one of my most loathed:

Some games will give you a bonus when you invite your friends to play and link to them to your account. This bonus may be a one-time benefit, or it may be an ongoing benefit that improves the gaming experience for each friend that you add. This gives players a strong incentive to convince their friends to play. Those friends then have to sign up more friends and so on, leading to a pyramid scheme and viral growth for the game.

Starry background with red pixelated text "Dark Pattern Games", a D-pad icon with red arrows, and URL www.darkpattern.games

DarkPattern.games » Healthy Gaming « Avoid Addictive Dark Patterns

Game reviews to help you find good games that don’t trick you into addictive gaming patterns.

darkpattern.games

Pavel Bukengolts writes a piece for UX Magazine that reiterates what I’ve been covering here: our general shift to AI means that human judgement and adaptability are more important than ever.

Before getting to the meat of the issue, Bukengolts highlights the talent crisis of our own making:

The outcome is a broken pipeline. If graduates cannot land their first jobs, they cannot build the experience needed for the next stage. A decade from now, organizations may face not just a shortage of junior workers, but a shortage of mid-level professionals who never had a chance to develop.

If rote repetitive tasks are being automated by AI and junior staffers aren’t needed for those tasks, then what skills are still valuable? Further on, he answers that question:

Centuries ago, in Athens, Alexandria, or Oxford, education focused on rhetoric, logic, and philosophy. These were not academic luxuries but survival skills for navigating complexity and persuasion. Ironically, they are once again becoming the most durable protection in an age of automation.

Some of these skills include:

  • Logic: Evaluating arguments and identifying flawed reasoning—essential when AI generates plausible but incorrect conclusions.
  • Rhetoric: Crafting persuasive narratives that create emotional connection and resonance beyond what algorithms can achieve.
  • Philosophy and Ethics: Examining not just capability but responsibility, particularly around automation’s broader implications.
  • Systems Thinking: Understanding interconnections and cascading effects that AI’s narrow outputs often miss.
  • Writing: Communicating with precision to align stakeholders and drive better outcomes.
  • Observation: Detecting subtle signals and anomalies that fall outside algorithmic training data.
  • Debate: Refining thinking through intellectual challenge—a practice dating to ancient dialogue.
  • History: Recognizing recurring patterns to avoid cyclical mistakes; AI enthusiasm echoes past technological revolutions.

I would say all of the above make not only a good designer but also a good citizen of this planet.

Young worker with hands over their face at a laptop, distressed. Caption: "AI is erasing routine entry-level jobs, pushing young workers to develop deeper human thinking skills to stay relevant."

AI, Early-Career Jobs, and the Return to Thinking

In today’s job market, young professionals are facing unprecedented challenges as entry-level positions vanish, largely due to the rise of artificial intelligence. A recent Stanford study reveals that employment for workers aged 22 to 25 in AI-exposed fields has plummeted by up to 16 percent since late 2022, while older workers see growth. This shift highlights a broken talent pipeline, where routine tasks are easily automated, leaving younger workers without the experience needed to advance. As companies grapple with how to integrate AI, the focus is shifting towards essential human skills like critical thinking, empathy, and creativity — skills that machines can’t replicate. The future of work may depend on how we adapt to this new landscape.

uxmag.com

In a heady, intelligent, and fascinating interview with Sarah Jeong from The Verge, Cory Doctorow, the famed internet activist, talks about how platforms have gotten worse over the years. Using Meta (Facebook) as an example, Doctorow explains the decline as a multi-stage process. First, the platform attracted users by promising not to spy on them and by showing them content from their friends, exploiting how hard it is to get friends to switch platforms. Then Meta compromised user privacy by handing advertisers surveillance data (aka ad tracking) and offered publishers traffic funnels, locking in its business customers. Finally, it degraded the experience for everyone, filling feeds with paid content and pivoting to less desirable ventures like the Metaverse.

And publishers, [to get visibility on the platform,] they have to put the full text of their articles on Facebook now and no links back to their website.

Otherwise, they won’t be shown to anyone, much less their subscribers, and they’re now fully substitutive, right? And the only way they can monetize that is with Facebook’s rigged ad market and users find that the amount of stuff that they ask to see in their feed is dwindled to basically nothing, so that these voids can be filled with stuff people will pay to show them, and those people are getting ripped off. This is the equilibrium Mark Zuckerberg wants, right? Where all the available value has been withdrawn. But he has to contend with the fact that this is a very brittle equilibrium. The difference between, “I hate Facebook, but I can’t seem to stop using it,” and “I hate Facebook and I’m not going to use it anymore,” is so brittle that if you get a live stream mass shooting or a whistleblower or a privacy scandal like Cambridge Analytica, people will flee.

Enshittification cover: title, Cory Doctorow, poop emoji with '&$!#%' censor bar, pixelated poop icons on neon panels.

How Silicon Valley enshittified the internet

Author Cory Doctorow on platform decay and why everything on the internet feels like it’s getting worse.

theverge.com

Francesca Bria and her collaborators analyzed open-source datasets of “over 250 actors, thousands of verified connections, and $45 billion in documented financial flows” to come up with a single-page website visualizing these relationships to show how money, companies, and political figures connect.

J.D. Vance, propelled to the vice-presidency by $15 million from Peter Thiel, became the face of tech-right governance. Behind him, Thiel’s network moved into the machinery of the state.

Under the banner of “patriotic tech”, this new bloc is building the infrastructure of control—clouds, AI, finance, drones, satellites—an integrated system we call the Authoritarian Stack. It is faster, ideological, and fully privatized: a regime where corporate boards, not public law, set the rules.

Our investigation shows how these firms now operate as state-like powers—writing the rules, winning the tenders, and exporting their model to Europe, where it poses a direct challenge to democratic governance.

Infographic of four dotted circles labeled Legislation, Companies, State, and Kingmakers containing many small colored nodes and tiny profile photos.

The Authoritarian Stack

How Tech Billionaires Are Building a Post-Democratic America — And Why Europe Is Next

authoritarian-stack.info

I suppose there are two types of souvenirs that we can pick up while traveling: mass-manufactured tchotchkes like fridge magnets or snow globes, or local artisan-made trinkets and wares. The latter has come to represent cultures outside of their locales, an opportunity for tourists to take a little bit of their experiences with them home.

Louisa Eunice writing in Design Observer:

The souvenir industry, though vital for many local economies, has long been accused of flattening cultural complexity into digestible clichés, transforming sacred objects into décor, and replacing sustainable materials with cheaper alternatives to meet demand. Yet for countless artisans, participation in that market remains a practical act of endurance: a way to keep culture visible in a world that might otherwise forget it.

So on the flip side, though these souvenirs reduce the cultures of those places to a carved giraffe, sculpted clay bird, or ceremonial mask, I would argue that at least the artifacts can spark conversation.

The fact that a mask can be both a ritual object for the local artisan who made it and a decorative item for the tourist who bought it says more about resilience than dilution. It reveals how objects can inhabit multiple meanings without losing their essence. What we often call “appropriation” may, in these moments, also be adaptation, a negotiation that allows heritage to stay visible, if altered, in the modern world. 

To understand souvenirs this way is to see them not as hollow tokens but as collaborations: between maker and buyer, local and global, art and economy. When tourists view these pieces as design legacies — works that carry labor, history, and symbolism — the exchange becomes more than commercial. It becomes cultural continuity in motion.

Decorative folding fan painted with women in kimono holding flags (Union Jack, Japanese flag), wooden ribs and dangling tassel on blue background

The afterlife of souvenirs: what survives between culture and commerce?

From carved masks to clay birds, the global souvenir trade tells a deeper story of adaptation, resilience, and cultural survival.

designobserver.com icondesignobserver.com

In just about a year, Bluesky has doubled its user base from 20 million to 40 million. Last year, in “the wake of Donald Trump’s re-election as president, and Elon Musk’s continued degradation of X, Bluesky welcomed an exodus of liberals, leftists, journalists, and academic researchers, among other groups.” Writing in his Platformer newsletter, Casey Newton reflects on the year, surfacing the challenges Bluesky has tried to solve in reimagining a more “feel-good feed.”

It’s clear that you can build a nicer online environment than X has; in many ways Bluesky already did. What’s less clear is that you can build a Twitter clone that mostly makes people feel good. For as vital and hilarious as Twitter often was, it also accelerated the polarization of our politics and often left users feeling worse than they did before they opened it.

Bluesky’s ingenuity in reimagining feeds and moderation tools has been a boon to social networks, which have happily adopted some of its best ideas. (You can now find “starter packs” on both Threads and Mastodon.) Ultimately, though, it has the same shape and fundamental dynamics as a place that even its most active users called “the Hellsite.”

Bluesky began by rethinking many core assumptions about social networks. To realize its dream of a feel-good feed, though, it will likely need to rethink several more.

I agree with Newton. I’m not sure that in this day and age, building a friendlier, snark-free, less toxic social media platform is possible. Users are too used to hiding behind keyboards. It’s not only the shitposters but also the online mobs who jump on anything that might seem out of the norm for whatever community a user belongs to.

Newton again:

Nate Silver opened the latest front in the Bluesky debate in September with a post about “Blueskyism,” which he defines as “not a political movement so much as a tribal affiliation, a niche set of attitudes and style of discursive norms that almost seem designed in a lab to be as unappealing as possible to anyone outside the clique.” Its hallmarks, he writes, are aggressively punishing dissent, credentialism, and a dedication to the proposition that we are all currently living through the end of the world.

Mobs, woke or otherwise, silence speech and freeze ideas into orthodoxy.

I miss the pre-Elon Musk Twitter. But I can’t help but think it would have become just as polarized and toxic regardless of Musk transforming it into X.

I think the form of text-based social media from the last 20 years is akin to manufacturing tobacco in the mid-1990s. We know it’s harmful. It may be time to slap a big warning label on these platforms and discourage use.

(Truth be told, I’m on the social networks—see the follow icons in the sidebar—but mainly to give visibility into my work here, though largely unsuccessfully.)

White rounded butterfly-shaped 3D icon with soft shadows centered on a bright blue background.

The Bluesky exodus, one year later

The company has 40 million users and big plans for the future. So why don’t its users seem happy? PLUS: The NEO Home Robot goes viral + Ilya Sutskever’s surprising deposition

platformer.news iconplatformer.news

To close us out on Halloween, here’s one more archive full of spooky UX called the Dark Patterns Hall of Shame. It’s managed by a team of designers and researchers who have dedicated themselves to identifying and exposing dark patterns and unethical design examples on the internet. More than anything, I just love the names some of these dark patterns have, like Confirmshaming, Privacy Zuckering, and Roach Motel.

Small gold trophy above bold dark text "Hall of shame. design" on a pale beige background.

Collection of Dark Patterns and Unethical Design

Discover a variety of dark pattern examples, sorted by category, to better understand deceptive design practices.

hallofshame.design iconhallofshame.design

Celine Nguyen wrote a piece that connects directly to what Ethan Mollick calls “working with wizards” and what SAP’s Ellie Kemery describes as the “calibration of trust” problem. It’s about how the interfaces we design shape the relationships we have with technology.

The through-line is metaphor. For LLMs, that metaphor is conversation. And it’s working—maybe too well:

Our intense longing to be understood can make even a rudimentary program seem human. This desire predates today’s technologies—and it’s also what makes conversational AI so promising and problematic.

When the metaphor is this good, we forget it’s a metaphor at all:

When we interact with an LLM, we instinctively apply the same expectations that we have for humans: If an LLM offers us incorrect information, or makes something up because the correct information is unavailable, it is lying to us. …The problem, of course, is that it’s a little incoherent to accuse an LLM of lying. It’s not a person.

We’re so trapped inside the conversational metaphor that we accuse statistical models of having intent, of choosing to deceive. The interface has completely obscured the underlying technology.

Nguyen points to research showing frequent chatbot users “showed consistently worse outcomes” around loneliness and emotional dependence:

Participants who are more likely to feel hurt when accommodating others…showed more problematic AI use, suggesting a potential pathway where individuals turn to AI interactions to avoid the emotional labor required in human relationships.

However, replacing human interaction with AI may only exacerbate their anxiety and vulnerability when facing people.

This isn’t just about individual users making bad choices. It’s about an interface design that encourages those choices by making AI feel like a relationship rather than a tool.

The kicker is that we’ve been here before. In 1964, Joseph Weizenbaum created ELIZA, a simple chatbot that parodied a therapist:

I was startled to see how quickly and how very deeply people conversing with [ELIZA] became emotionally involved with the computer and how unequivocally they anthropomorphized it…What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.

Sixty years later, we’ve built vastly more sophisticated systems. But the fundamental problem remains unchanged.

The reality is we’re designing interfaces that make powerful tools feel like people. Susan Kare’s icons for the Macintosh helped millions understand computers. But they didn’t trick people into thinking their computers cared about them.

That’s the difference. And it matters.

Old instant-message window showing "MeowwwitsMadix3: heyyy" and "are you mad at me?" with typed reply "no i think im just kinda embarassed" and buttons Warn, Block, Expressions, Games, Send.

how to speak to a computer

against chat interfaces ✦ a brief history of artificial intelligence ✦ and the (worthwhile) problem of other minds

personalcanon.com iconpersonalcanon.com

Speaking of trusting AI, in a recent episode of Design Observer’s Design As, Lee Moreau speaks with four industry leaders about trust and doubt in the age of AI.

We’ve linked to a story about Waymo before, so here’s Ryan Powell, head of UX at Waymo:

Safety is at the heart of everything that we do. We’ve been at this for a long time, over a decade, and we’ve taken a very cautious approach to how we scale up our technology. As designers, what we have really focused on is that idea that more people will use us as a serious transportation option if they trust us. We peel that back a little bit. Okay, well, How do we design for trust? What does it actually mean?

Ellie Kemery, principal research lead, advancing responsible AI at SAP, on maintaining critical thinking and transparency in AI-driven products:

We need to think about ethics as a part of this because the unintended consequences, especially at the scale that we operate, are just too big, right?

So we focus a lot of our energy on value, delivering the right value, but we also focus a lot of our energy on making sure that people are aware of how the technology came to that output,…making sure that people are in control of what’s happening at all times, because at the end of the day, they need to be the ones making the call.

Everybody’s aware that without trust, there is no adoption. But there is something that people aren’t talking about as much, which is that people should also not blindly trust a system, right? And there’s a huge risk there because, humans we tend to, you know, we’ll try something a couple of times and if it works it works. And then we lose that critical thinking. We stop checking those things and we simply aren’t in a space where we can do that yet. And so making sure that we’re focusing on the calibration of trust, like what is the right amount of trust that people should have to be able to benefit from the technology while at the same time making sure that they’re aware of the limitations.

Bold white letters in a 3x3 grid reading D E S / I G N / A S on a black background, with a right hand giving a thumbs-up over the right column.

Design as Trust | Design as Doubt

Explore how designers build trust, confront doubt, and center equity and empathy in the age of AI with leaders from Adobe, Waymo, RUSH, and SAP

designobserver.com icondesignobserver.com

Slow and steady wins the race, so they say. And in Waymo’s case, that’s true. Unlike the stereotypical Silicon Valley of “Move fast and break things,” Waymo has been very deliberate and intentional in developing its self-driving tech. In other words, they’re really trying to account for the unintended consequences.

Writing for The Atlantic, Saahil Desai:

Compared with its robotaxi competitors, “Waymo has moved the slowest and the most deliberately,” [Bryant Walker Smith] said—which may be a lesson for the world’s AI developers. The company was founded in 2009 as a secretive project inside of Google; a year later, it had logged 1,000 miles of autonomous rides in a tricked-out Prius. Close to a decade later, in 2018, Waymo officially launched its robotaxi service. Even now, when Waymos are inching their way into the mainstream, the company has been hypercautious. The company is limited to specific zones within the five cities it operates in (San Francisco, Phoenix, Los Angeles, Austin, and Atlanta). And only Waymo employees and “a growing number of guests” can ride them on the highway, Chris Bonelli, a Waymo spokesperson, told me. Although the company successfully completed rides on the highway years ago, higher speeds bring more risk for people and self-driving cars alike. What might look like a few grainy pixels to Waymo’s cameras one moment could be roadkill to swerve around the very next.

Move Fast and Break Nothing

Waymo’s robotaxis are probably safer than ChatGPT.

theatlantic.com icontheatlantic.com

As UX designers, we try to anticipate the edge cases: what might a user do, and how can we ensure they don’t hit any blockers? But beyond the confines of the products we build, we must also remember to anticipate the unintended consequences. How might this product or feature affect the user emotionally? Are we creating bad habits? Are we fomenting rage in pursuit of engagement?

Martin Tomitsch and Steve Baty write in DOC, suggesting some frameworks to anticipate the unpredictable:

Chaos theory describes the observation that even tiny perturbations like the flutter of a butterfly can lead to dramatic, non-linear effects elsewhere over time. Seemingly small changes or decisions that we make as designers can have significant and often unforeseen consequences.

As designers, we can’t directly control the chain of reactions that will follow an action. Reactions are difficult to predict, as they occur depending on factors beyond our direct control.

But by using tools like systems maps, the impact ripple canvas, and iceberg visuals, we can take potential reactions out of the unpredictable pile and shift them into the foreseeable pile.

The UX butterfly effect

Understanding unintended consequences in design and how to plan for them.

doc.cc icondoc.cc

Definitely use AI at work if you can. You’d be guilty of professional negligence if you don’t. But, you must not blindly take output from ChatGPT, Claude, or Gemini and use it as-is. You have to check it, verify that it’s free from hallucinations, and applicable to the task at hand. Otherwise, you’ll generate “workslop.”

Kate Niederhoffer, Gabriella Rosen Kellerman, et al., writing in Harvard Business Review, report on a study by the Stanford Social Media Lab and BetterUp Labs. They write, “Employees are using AI tools to create low-effort, passable looking work that ends up creating more work for their coworkers.”

Here’s how this happens. As AI tools become more accessible, workers are increasingly able to quickly produce polished output: well-formatted slides, long, structured reports, seemingly articulate summaries of academic papers by non-experts, and usable code. But while some employees are using this ability to polish good work, others use it to create content that is actually unhelpful, incomplete, or missing crucial context about the project at hand. The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work. In other words, it transfers the effort from creator to receiver.

Don’t be like this. Use it to do better work, not to turn in mediocre work.

Workslop may feel effortless to create but exacts a toll on the organization. What a sender perceives as a loophole becomes a hole the recipient needs to dig out of. Leaders will do best to model thoughtful AI use that has purpose and intention. Set clear guardrails for your teams around norms and acceptable use. Frame AI as a collaborative tool, not a shortcut. Embody a pilot mindset, with high agency and optimism, using AI to accelerate specific outcomes with specific usage. And uphold the same standards of excellence for work done by bionic human-AI duos as by humans alone.

AI-Generated “Workslop” Is Destroying Productivity

Despite a surge in generative AI use across workplaces, most companies are seeing little measurable ROI. One possible reason is because AI tools are being used to produce “workslop”—content that appears polished but lacks real substance, offloading cognitive labor onto coworkers. Research from BetterUp Labs and Stanford found that 41% of workers have encountered such AI-generated output, costing nearly two hours of rework per instance and creating downstream productivity, trust, and collaboration issues. Leaders need to consider how they may be encouraging indiscriminate organizational mandates and offering too little guidance on quality standards. To counteract workslop, leaders should model purposeful AI use, establish clear norms, and encourage a “pilot mindset” that combines high agency with optimism—promoting AI as a collaborative tool, not a shortcut.

hbr.org iconhbr.org

Designer Ben Holliday writes a wonderful deep dive into how caring is good design. In it, he references the conversation that Jony Ive had with Patrick Collison a few months ago. (It’s worth watching in its entirety if you haven’t already.)

Watching the interview back, I was struck by how he spoke about applying care to design, describing how:

“…everyone has the ability to sense the care in designed things because we can all recognise carelessness.”

Talking about the history of industrial design at Apple, Ive speaks about the care that went into the design of every product. That included the care that went into packaging – specifically things that might seem as inconsequential as how a cable was wrapped and then unpackaged. In reality, the type of small interactions that millions of people experienced when unboxing the latest iPhone. These are details that people wouldn’t see as such, but Ive and team believed that they would sense care when they had been carefully considered and designed.

This approach has always been a part of Jony Ive’s design philosophy, or the principles applied by his creative teams at Apple. I looked back and found an earlier 2015 interview and notes I’d made where he says how he believes that the majority of our manufactured environment is characterised by carelessness. But then, how, at Apple, they wanted people to sense care in their products.

The attention to detail and the focus and attention we can all bring to design is care. It’s important.

Holliday’s career has been focused in government, public sector, and non-profit environments. In other words, he thinks a lot about how design can impact people’s lives at massive scale.

In the past few months, I’ve been drawn to the word ‘careless’ when thinking about the challenges faced by our public services and society. This is especially the case with the framing around the impact of technology in our lives, and increasingly the big bets being made around AI to drive efficiency and productivity.

The word careless can be defined as the failure to give sufficient attention to avoiding harm or errors. Put simply, carelessness can be described as ‘negligence’.

Later, he cites Facebook/Meta’s carelessness when they “used data to target young people when at their most vulnerable,” specifically, body confidence.

Design is care (and sensing carelessness)

Why design is care, and how the experiences we shape and deliver will be defined by how people sense that care in the future.

benholliday.com iconbenholliday.com

Writing for UX Collective, Filipe Nzongo argues that designers should embrace behavior as a fundamental design material—not just to drive metrics or addiction, but to intentionally create products that empower people and foster meaningful, lasting change in their lives.

Behavior should be treated as a design material, just as technology once became our material. If we use behavior thoughtfully, we can create better products. More than that, I believe there is a broader and more meaningful opportunity before us: to design for behavior. Not to make people addicted to products, but to help them grow as human beings, better parents, citizens, students, and professionals. Because if behavior is our medium, then design is our tool for empowerment.

Behavior is our medium

The focus should remain on human

uxdesign.cc iconuxdesign.cc

In the scenario “AI 2027,” the authors argue that by October 2027—exactly two years from now—we will be at an inflection point: race to build superintelligence, or slow the pace to fix misalignment issues first.

Writing in The Argument, Derek Thompson takes a different predicted AI doomsday date—18 months—and argues:

The problem of the next 18 months isn’t AI disemploying all workers, or students losing competition after competition to nonhuman agents. The problem is whether we will degrade our own capabilities in the presence of new machines. We are so fixated on how technology will outskill us that we miss the many ways that we can deskill ourselves.

Degrading our own capabilities includes writing:

The demise of writing matters because writing is not a second thing that happens after thinking. The act of writing is an act of thinking. This is as true for professionals as it is for students. In “Writing is thinking,” an editorial in Nature, the authors argued that “outsourcing the entire writing process to LLMs” deprives scientists of the important work of understanding what they’ve discovered and why it matters.

The decline of writing and reading matters because writing and reading are the twin pillars of deep thinking, according to Cal Newport, a computer science professor and the author of several bestselling books, including Deep Work. The modern economy prizes the sort of symbolic logic and systems thinking for which deep reading and writing are the best practice.

More depressing trends to add to the list.

“You have 18 months”

The real deadline isn’t when AI outsmarts us — it’s when we stop using our own minds.

theargumentmag.com icontheargumentmag.com