
37 posts tagged with “ethics”

In just about a year, Bluesky has doubled its userbase from 20 million to 40 million. Last year, as Casey Newton puts it, “in the wake of Donald Trump’s re-election as president, and Elon Musk’s continued degradation of X, Bluesky welcomed an exodus of liberals, leftists, journalists, and academic researchers, among other groups.” Writing in his Platformer newsletter, Newton reflects on the year, surfacing the challenges Bluesky has tried to solve in reimagining a more “feel-good feed.”

It’s clear that you can build a nicer online environment than X has; in many ways Bluesky already did. What’s less clear is that you can build a Twitter clone that mostly makes people feel good. For as vital and hilarious as Twitter often was, it also accelerated the polarization of our politics and often left users feeling worse than they did before they opened it.

Bluesky’s ingenuity in reimagining feeds and moderation tools has been a boon to social networks, which have happily adopted some of its best ideas. (You can now find “starter packs” on both Threads and Mastodon.) Ultimately, though, it has the same shape and fundamental dynamics as a place that even its most active users called “the Hellsite.”

Bluesky began by rethinking many core assumptions about social networks. To realize its dream of a feel-good feed, though, it will likely need to rethink several more.

I agree with Newton. I’m not sure that, in this day and age, building a friendlier, snark- and toxicity-free social media platform is possible. Users are too used to hiding behind keyboards. It’s not only the shitposters but also the online mobs who jump on anything that seems outside the norms of whatever community a user belongs to.

Newton again:

Nate Silver opened the latest front in the Bluesky debate in September with a post about “Blueskyism,” which he defines as “not a political movement so much as a tribal affiliation, a niche set of attitudes and style of discursive norms that almost seem designed in a lab to be as unappealing as possible to anyone outside the clique.” Its hallmarks, he writes, are aggressively punishing dissent, credentialism, and a dedication to the proposition that we are all currently living through the end of the world.

Mobs, woke or otherwise, silence speech and freeze ideas into orthodoxy.

I miss the pre-Elon Musk Twitter. But I can’t help but think it would have become just as polarized and toxic regardless of Musk transforming it into X.

I think the form of text-based social media from the last 20 years is akin to manufacturing tobacco in the mid-1990s. We know it’s harmful. It may be time to slap a big warning label on these platforms and discourage use.

(Truth be told, I’m on the social networks—see the follow icons in the sidebar—but mainly to give visibility into my work here, though largely unsuccessfully.)

White rounded butterfly-shaped 3D icon with soft shadows centered on a bright blue background.

The Bluesky exodus, one year later

The company has 40 million users and big plans for the future. So why don’t its users seem happy? PLUS: The NEO Home Robot goes viral + Ilya Sutskever’s surprising deposition

platformer.news

To close us out on Halloween, here’s one more archive full of spooky UX called the Dark Patterns Hall of Shame. It’s managed by a team of designers and researchers, who have dedicated themselves to identifying and exposing dark patterns and unethical design examples on the internet. More than anything, I just love the names some of these dark patterns have, like Confirmshaming, Privacy Zuckering, and Roach Motel.

Small gold trophy above bold dark text "Hall of shame. design" on a pale beige background.

Collection of Dark Patterns and Unethical Design

Discover a variety of dark pattern examples, sorted by category, to better understand deceptive design practices.

hallofshame.design

Celine Nguyen wrote a piece that connects directly to what Ethan Mollick calls “working with wizards” and what SAP’s Ellie Kemery describes as the “calibration of trust” problem. It’s about how the interfaces we design shape the relationships we have with technology.

The through-line is metaphor. For LLMs, that metaphor is conversation. And it’s working—maybe too well:

Our intense longing to be understood can make even a rudimentary program seem human. This desire predates today’s technologies—and it’s also what makes conversational AI so promising and problematic.

When the metaphor is this good, we forget it’s a metaphor at all:

When we interact with an LLM, we instinctively apply the same expectations that we have for humans: If an LLM offers us incorrect information, or makes something up because the correct information is unavailable, it is lying to us. …The problem, of course, is that it’s a little incoherent to accuse an LLM of lying. It’s not a person.

We’re so trapped inside the conversational metaphor that we accuse statistical models of having intent, of choosing to deceive. The interface has completely obscured the underlying technology.

Nguyen points to research showing frequent chatbot users “showed consistently worse outcomes” around loneliness and emotional dependence:

Participants who are more likely to feel hurt when accommodating others…showed more problematic AI use, suggesting a potential pathway where individuals turn to AI interactions to avoid the emotional labor required in human relationships.

However, replacing human interaction with AI may only exacerbate their anxiety and vulnerability when facing people.

This isn’t just about individual users making bad choices. It’s about an interface design that encourages those choices by making AI feel like a relationship rather than a tool.

The kicker is that we’ve been here before. In 1964, Joseph Weizenbaum created ELIZA, a simple chatbot that parodied a therapist:

I was startled to see how quickly and how very deeply people conversing with [ELIZA] became emotionally involved with the computer and how unequivocally they anthropomorphized it…What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.

Sixty years later, we’ve built vastly more sophisticated systems. But the fundamental problem remains unchanged.

The reality is we’re designing interfaces that make powerful tools feel like people. Susan Kare’s icons for the Macintosh helped millions understand computers. But they didn’t trick people into thinking their computers cared about them.

That’s the difference. And it matters.

Old instant-message window showing "MeowwwitsMadix3: heyyy" and "are you mad at me?" with typed reply "no i think im just kinda embarassed" and buttons Warn, Block, Expressions, Games, Send.

how to speak to a computer

against chat interfaces ✦ a brief history of artificial intelligence ✦ and the (worthwhile) problem of other minds

personalcanon.com

Speaking of trusting AI, in a recent episode of Design Observer’s Design As, Lee Moreau speaks with four industry leaders about trust and doubt in the age of AI.

We’ve linked to a story about Waymo before, so here’s Ryan Powell, head of UX at Waymo:

Safety is at the heart of everything that we do. We’ve been at this for a long time, over a decade, and we’ve taken a very cautious approach to how we scale up our technology. As designers, what we have really focused on is that idea that more people will use us as a serious transportation option if they trust us. We peel that back a little bit. Okay, well, how do we design for trust? What does it actually mean?

Ellie Kemery, principal research lead, advancing responsible AI at SAP, on maintaining critical thinking and transparency in AI-driven products:

We need to think about ethics as a part of this because the unintended consequences, especially at the scale that we operate, are just too big, right?

So we focus a lot of our energy on value, delivering the right value, but we also focus a lot of our energy on making sure that people are aware of how the technology came to that output,…making sure that people are in control of what’s happening at all times, because at the end of the day, they need to be the ones making the call.

Everybody’s aware that without trust, there is no adoption. But there is something that people aren’t talking about as much, which is that people should also not blindly trust a system, right? And there’s a huge risk there because, humans we tend to, you know, we’ll try something a couple of times and if it works it works. And then we lose that critical thinking. We stop checking those things and we simply aren’t in a space where we can do that yet. And so making sure that we’re focusing on the calibration of trust, like what is the right amount of trust that people should have to be able to benefit from the technology while at the same time making sure that they’re aware of the limitations.

Bold white letters in a 3x3 grid reading D E S / I G N / A S on a black background, with a right hand giving a thumbs-up over the right column.

Design as Trust | Design as Doubt

Explore how designers build trust, confront doubt, and center equity and empathy in the age of AI with leaders from Adobe, Waymo, RUSH, and SAP

designobserver.com

Slow and steady wins the race, so they say. And in Waymo’s case, that’s true. Unlike the stereotypical Silicon Valley of “Move fast and break things,” Waymo has been very deliberate and intentional in developing its self-driving tech. In other words, they’re really trying to account for the unintended consequences.

Writing for The Atlantic, Saahil Desai:

Compared with its robotaxi competitors, “Waymo has moved the slowest and the most deliberately,” [Bryant Walker Smith] said—which may be a lesson for the world’s AI developers. The company was founded in 2009 as a secretive project inside of Google; a year later, it had logged 1,000 miles of autonomous rides in a tricked-out Prius. Close to a decade later, in 2018, Waymo officially launched its robotaxi service. Even now, when Waymos are inching their way into the mainstream, the company has been hypercautious. The company is limited to specific zones within the five cities it operates in (San Francisco, Phoenix, Los Angeles, Austin, and Atlanta). And only Waymo employees and “a growing number of guests” can ride them on the highway, Chris Bonelli, a Waymo spokesperson, told me. Although the company successfully completed rides on the highway years ago, higher speeds bring more risk for people and self-driving cars alike. What might look like a few grainy pixels to Waymo’s cameras one moment could be roadkill to swerve around the very next.

Move Fast and Break Nothing

Waymo’s robotaxis are probably safer than ChatGPT.

theatlantic.com

As UX designers, we try to anticipate the edge cases—what might a user do and how can we ensure they don’t hit any blockers. But beyond the confines of the products we build, we must also remember to anticipate the unintended consequences. How might this product or feature affect the user emotionally? Are we creating bad habits? Are we fomenting rage in pursuit of engagement?

Martin Tomitsch and Steve Baty write in DOC, suggesting some frameworks to anticipate the unpredictable:

Chaos theory describes the observation that even tiny perturbations like the flutter of a butterfly can lead to dramatic, non-linear effects elsewhere over time. Seemingly small changes or decisions that we make as designers can have significant and often unforeseen consequences.

As designers, we can’t directly control the chain of reactions that will follow an action. Reactions are difficult to predict, as they occur depending on factors beyond our direct control.

But by using tools like systems maps, the impact ripple canvas, and iceberg visuals, we can take potential reactions out of the unpredictable pile and shift them into the foreseeable pile.

The UX butterfly effect

Understanding unintended consequences in design and how to plan for them.

doc.cc

Definitely use AI at work if you can. You’d be guilty of professional negligence if you don’t. But you must not blindly take output from ChatGPT, Claude, or Gemini and use it as-is. You have to check it and verify that it’s free from hallucinations and applicable to the task at hand. Otherwise, you’ll generate “workslop.”

Kate Niederhoffer, Gabriella Rosen Kellerman, et al., in Harvard Business Review, report on a study by Stanford Social Media Lab and BetterUp Labs. They write, “Employees are using AI tools to create low-effort, passable looking work that ends up creating more work for their coworkers.”

Here’s how this happens. As AI tools become more accessible, workers are increasingly able to quickly produce polished output: well-formatted slides, long, structured reports, seemingly articulate summaries of academic papers by non-experts, and usable code. But while some employees are using this ability to polish good work, others use it to create content that is actually unhelpful, incomplete, or missing crucial context about the project at hand. The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work. In other words, it transfers the effort from creator to receiver.

Don’t be like this. Use it to do better work, not to turn in mediocre work.

Workslop may feel effortless to create but exacts a toll on the organization. What a sender perceives as a loophole becomes a hole the recipient needs to dig out of. Leaders will do best to model thoughtful AI use that has purpose and intention. Set clear guardrails for your teams around norms and acceptable use. Frame AI as a collaborative tool, not a shortcut. Embody a pilot mindset, with high agency and optimism, using AI to accelerate specific outcomes with specific usage. And uphold the same standards of excellence for work done by bionic human-AI duos as by humans alone.

AI-Generated “Workslop” Is Destroying Productivity

Despite a surge in generative AI use across workplaces, most companies are seeing little measurable ROI. One possible reason is because AI tools are being used to produce “workslop”—content that appears polished but lacks real substance, offloading cognitive labor onto coworkers. Research from BetterUp Labs and Stanford found that 41% of workers have encountered such AI-generated output, costing nearly two hours of rework per instance and creating downstream productivity, trust, and collaboration issues. Leaders need to consider how they may be encouraging indiscriminate organizational mandates and offering too little guidance on quality standards. To counteract workslop, leaders should model purposeful AI use, establish clear norms, and encourage a “pilot mindset” that combines high agency with optimism—promoting AI as a collaborative tool, not a shortcut.

hbr.org

Designer Ben Holliday writes a wonderful deep dive into how caring is good design. In it, he references the conversation that Jony Ive had with Patrick Collison a few months ago. (It’s worth watching in its entirety if you haven’t already.)

Watching the interview back, I was struck by how he spoke about applying care to design, describing how:

“…everyone has the ability to sense the care in designed things because we can all recognise carelessness.”

Talking about the history of industrial design at Apple, Ive speaks about the care that went into the design of every product. That included the care that went into packaging – specifically things that might seem as inconsequential as how a cable was wrapped and then unpackaged. In reality, the type of small interactions that millions of people experienced when unboxing the latest iPhone. These are details that people wouldn’t see as such, but Ive and team believed that they would sense care when they had been carefully considered and designed.

This approach has always been a part of Jony Ive’s design philosophy, or the principles applied by his creative teams at Apple. I looked back and found an earlier 2015 interview and notes I’d made where he says how he believes that the majority of our manufactured environment is characterised by carelessness. But then, how, at Apple, they wanted people to sense care in their products.

The attention to detail and the focus and attention we can all bring to design is care. It’s important.

Holliday’s career has been focused on government, public sector, and non-profit environments. In other words, he thinks a lot about how design can impact people’s lives at massive scale.

In the past few months, I’ve been drawn to the word ‘careless’ when thinking about the challenges faced by our public services and society. This is especially the case with the framing around the impact of technology in our lives, and increasingly the big bets being made around AI to drive efficiency and productivity.

The word careless can be defined as the failure to give sufficient attention to avoiding harm or errors. Put simply, carelessness can be described as ‘negligence’.

Later, he cites Facebook/Meta’s carelessness when they “used data to target young people when at their most vulnerable,” specifically, body confidence.

Design is care (and sensing carelessness)

Why design is care, and how the experiences we shape and deliver will be defined by how people sense that care in the future.

benholliday.com

Writing for UX Collective, Filipe Nzongo argues that designers should embrace behavior as a fundamental design material—not just to drive metrics or addiction, but to intentionally create products that empower people and foster meaningful, lasting change in their lives.

Behavior should be treated as a design material, just as technology once became our material. If we use behavior thoughtfully, we can create better products. More than that, I believe there is a broader and more meaningful opportunity before us: to design for behavior. Not to make people addicted to products, but to help them grow as human beings, better parents, citizens, students, and professionals. Because if behavior is our medium, then design is our tool for empowerment.

Behavior is our medium

The focus should remain on human

uxdesign.cc

In the scenario “AI 2027,” the authors argue that by October 2027—exactly two years from now—we will be at an inflection point. Race to build the superintelligence, or slow down the pace to fix misalignment issues first.

Derek Thompson, writing in The Argument, takes a different predicted AI doomsday date—18 months—and argues:

The problem of the next 18 months isn’t AI disemploying all workers, or students losing competition after competition to nonhuman agents. The problem is whether we will degrade our own capabilities in the presence of new machines. We are so fixated on how technology will outskill us that we miss the many ways that we can deskill ourselves.

Degrading our own capabilities includes writing:

The demise of writing matters because writing is not a second thing that happens after thinking. The act of writing is an act of thinking. This is as true for professionals as it is for students. In “Writing is thinking,” an editorial in Nature, the authors argued that “outsourcing the entire writing process to LLMs” deprives scientists of the important work of understanding what they’ve discovered and why it matters.

The decline of writing and reading matters because writing and reading are the twin pillars of deep thinking, according to Cal Newport, a computer science professor and the author of several bestselling books, including Deep Work. The modern economy prizes the sort of symbolic logic and systems thinking for which deep reading and writing are the best practice.

More depressing trends to add to the list.

“You have 18 months”

The real deadline isn’t when AI outsmarts us — it’s when we stop using our own minds.

theargumentmag.com

I love this framing by Patrizia Bertini:

Let me offer a different provocation: AI is not coming for your job. It is coming for your tasks. And if you cannot distinguish between the two, then yes — you should be worried.

Going further, she distinguishes between output and outcome:

Output is what a process produces. Code. Copy. Designs. Legal briefs. Medical recommendations. Outputs are the tangible results of a system executing its programmed or prescribed function — the direct product of following steps, rules, or algorithms. The term emerged in the industrial era, literally describing the quantity of coal or iron a mine could extract in a given period. Output depends entirely on the efficiency and capability of the process that generates it.

Outcome is what happens when that output meets reality. An outcome requires context, interpretation, application, and crucially — intentionality. Outcomes demand understanding not just what was produced, but why it matters, who it affects, and what consequences ripple from it. Where outputs measure productivity, outcomes measure impact. They are the ultimate change or consequence that results from applying an output with purpose and judgment.

She argues that, “AI can generate outputs. It cannot, however, create outcomes.”

This reminds me of a recent thread by engineer Marc Love:

It’s insane just how much how I work has changed in the last 18 months.

I almost never hand write code anymore except when giving examples during planning conversations with LLMs.

I build multiple full features per day, each of which would’ve taken me a week or more to hand write. Building full drafts and discarding them is basically free.

Well over half of my day is spent ideating, doing systems design, and deciding what and what not to build.

It’s still conceptually the same job, but if i list out the specific things i do in a day versus 18 months ago, it’s almost completely different.

Care about the outcome, not the output.


When machines make outputs, humans must own outcomes

The future of work in the age of AI and deepware.

uxdesign.cc

Tim Berners-Lee, the father of the web who gave away the technology for free, says that we are at an inflection point with data privacy and AI. But before he makes that point, he reminds us that we are the product:

Today, I look at my invention and I am forced to ask: is the web still free today? No, not all of it. We see a handful of large platforms harvesting users’ private data to share with commercial brokers or even repressive governments. We see ubiquitous algorithms that are addictive by design and damaging to our teenagers’ mental health. Trading personal data for use certainly does not fit with my vision for a free web.

On many platforms, we are no longer the customers, but instead have become the product. Our data, even if anonymised, is sold on to actors we never intended it to reach, who can then target us with content and advertising. This includes deliberately harmful content that leads to real-world violence, spreads misinformation, wreaks havoc on our psychological wellbeing and seeks to undermine social cohesion.

And about that fork in the road with AI:

In 2017, I wrote a thought experiment about an AI that works for you. I called it Charlie. Charlie works for you like your doctor or your lawyer, bound by law, regulation and codes of conduct. Why can’t the same frameworks be adopted for AI? We have learned from social media that power rests with the monopolies who control and harvest personal data. We can’t let the same thing happen with AI.


Why I gave the world wide web away for free

My vision was based on sharing, not exploitation – and here’s why it’s still worth fighting for

theguardian.com

In my most recent post, I called out our design profession for our part in developing these addictive products. Jeffrey Inscho widens the lens to the tech industry at large and observes that they’re actually publishers:

The executives at these companies will tell you they’re neutral platforms, that they don’t choose what content gets seen. This is a lie. Every algorithmic recommendation is an editorial decision. When YouTube’s algorithm suggests increasingly extreme political content to keep someone watching, that’s editorial. When Facebook’s algorithm amplifies posts that generate angry reactions, that’s editorial. When Twitter’s trending algorithms surface conspiracy theories, that’s editorial.

They are publishers. They have always been publishers. They just don’t want the responsibility that comes with being publishers.

His point is that if these social media platforms are sorting and promoting posts, it’s an editorial approach and they should be treated like newspapers. “It’s like a newspaper publisher claiming they’re not responsible for what appears on their front page because they didn’t write the articles themselves.”

The answer, Inscho argues, is regulation of the algorithms.

Turn Off the Internet

Big tech has built machines designed for one thing: to hold …

staticmade.com
Dark red-toned artwork of a person staring into a glowing phone, surrounded by swirling shadows.

Blood in the Feed: Social Media’s Deadly Design

The assassination of Charlie Kirk on September 10, 2025, marked a horrifying inflection point in the growing debate over how digital platforms amplify rage and destabilize politics. Having already stepped back from social media after Trump’s re-election, I found that watching these events unfold from a distance only confirmed my decision. My feeds had become pits of despair, grievances, and overall negativity that did my mental health no good. While I understand the need to shine a light on the atrocities of Trump and his government, the constant barrage was too much. So I mostly opted out, save for the occasional promotion of my writing.

Kirk’s death feels like the inevitable conclusion of systems we’ve built—systems that reward outrage, amplify division, and transform human beings into content machines optimized for engagement at any cost.

I believe purity tests of any sort are problematic. And it’s much too easy to throw around the “This is AI slop!” claim. AI was used in the main title sequence for the Marvel TV show Secret Invasion. But it was on purpose and aligned with the show’s themes of shapeshifters.

Anyway, Daniel John, writing in Creative Bloq:

[Lady] Gaga just dropped the music video for The Dead Dance, a song debuted in Season 2 of Netflix’s Wednesday. Directed by Tim Burton, it’s a suitably nightmarish black-and-white cacophony of monsters and dolls. But some are already claiming that parts of it were made using AI.

John shows a tweet from @graveyardquy as an example:

i didn’t think we’d ever be in a timeline where a tim burton x lady gaga collab would turn out to be AI slop… but here we are

We need to separate quality critiques from tool usage. If it looks good and is appropriate, I’m fine with CG, AI, and whatever comes next that helps tell the story. Same goes for what we do as designers, ’natch.

Gaga’s song is great. It’s a bop, as the kids say, with a neat music video to boot.


The Lady Gaga backlash proves AI paranoia has gone too far

Just because it looks odd, doesn't mean it's AI.

creativebloq.com

Designer Tey Bannerman writes that when he hears “human in the loop,” he’s reminded of the story of Lieutenant Colonel Stanislav Petrov, a Soviet duty officer who monitored for incoming missile strikes from the US.

12:15 AM… the unthinkable. Every alarm in the facility started screaming. The screens showed five US ballistic missiles, 28 minutes from impact. Confidence level: 100%. Petrov had minutes to decide whether to trigger a chain reaction that would start nuclear war and could very well end civilisation as we knew it.

He was the “human in the loop” in the most literal, terrifying sense.

Everything told him to follow protocol. His training. His commanders. The computers.

But something felt wrong. His intuition, built from years of intelligence work, whispered that this didn’t match what he knew about US strategic thinking.

Against every protocol, against the screaming certainty of technology, he pressed the button marked “false alarm”.

Twenty-three minutes of gripping fear passed before ground radar confirmed: no missiles. The system had mistaken a rare alignment of sunlight on high-altitude clouds for incoming warheads.

His decision to break the loop prevented nuclear war.

Then Bannerman shares an awesome framework he developed that gives humans in the loop of AI systems “genuine authority, time to think, and understanding the bigger picture well enough to question” the system’s decision. Click through to get the PDF from his site.

Framework diagram by Tey Bannerman titled Beyond ‘human in the loop’. It shows a 4×4 matrix mapping AI oversight approaches based on what is being optimized (speed/volume, quality/accuracy, compliance, innovation) and what’s at stake (irreversible consequences, high-impact failures, recoverable setbacks, low-stakes outcomes). Colored blocks represent four modes: active control, human augmentation, guided automation, and AI autonomy. Right panel gives real-world examples in e-commerce email marketing and recruitment applicant screening.

Redefining ‘human in the loop’

"Human in the loop" is overused and vague. The Petrov story shows humans must have real authority, time, and context to safely override AI. Bannerman offers a framework that asks what you optimize for and what is at stake, then maps 16 practical approaches.

teybannerman.com
Stylized artwork showing three figures in profile - two humans and a metallic robot skull - connected by a red laser line against a purple cosmic background with Earth below.

Beyond Provocative: How One AI Company’s Ad Campaign Betrays Humanity

I was in London last week with my family and spotted this ad in a Tube car. With the headline “Humans Were the Beta Test,” it’s for Artisan, a San Francisco-based startup peddling AI-powered “digital workers.” Specifically, an AI agent that performs sales outreach to prospects, etc.

London Underground tube car advertisement showing

Artisan ad as seen in London, June 2025

I left the Bay Area long ago, but I know that the 101 highway is littered with cryptic billboards from tech companies, where the copy only makes sense to people in the tech industry, which, to be fair, is a large part of the Bay Area economy. Artisan is infamous for its “Stop Hiring Humans” campaign, which went up late last year. Being based in San Diego, much further south in California, I had no idea. Artisan wasn’t even on my radar.

This piece from Mike Schindler is a good reminder that a lot of the content we see on LinkedIn is written for engagement. It’s a double-edged sword, isn’t it? We want our posts to be read, commented upon, and shared. We see the patterns that get a lot of reactions and we mimic them.

We’re losing ourselves to our worst instincts. Not because we’re doomed, but because we’re treating this moment like a game of hot takes and hustle. But right now is actually a rare and real opportunity for a smarter, more generous conversation — one that helps our design community navigate uncertainty with clarity, creativity, and a sense of shared agency.

But the point that Schindler is making is this: AI is a fundamental shift in the technology landscape that demands nuanced and thoughtful discourse. There’s a lot of hype. But as technologists, designers, and makers of products, we really need to lead rather than scare.

I’ve tried to do that in my writing (though I may not always be successful). I hope you do too.

He has this handy table too…

Chart titled “AI & UX Discourse Detox” compares unhealthy discourse (e.g., FOMO, gaslighting, clickbait, hot takes, flexing, elitism) with healthy alternatives (e.g., curiosity-driven learning, critical perspective, nuanced storytelling, thoughtful dialogue, shared discovery, community stewardship). Created by Mike Schindler.

Designed by Mike Schindler (mschindler.com)


The broken rhetoric of AI

A detox guide for designers navigating today’s AI discourse

uxdesign.cc
Stylized digital artwork of two humanoid figures with robotic and circuit-like faces, set against a vivid red and blue background.

The AI Hype Train Has No Brakes

I remember, two years ago, the CEO of the startup I worked for at the time saying that no VC investments were being made unless they had to do with AI. I thought AI was overhyped and that the media frenzy over it couldn’t get any crazier. I was wrong.

Looking at Google Trends data, interest in AI has doubled in the last 24 months. And I don’t think it’s hit its plateau yet.

Line chart showing Google Trends interest in “AI” from May 2020 to May 2025, rising sharply in early 2023 and peaking near 100 in early 2025.

Dan Maccarone:

If users don’t trust the systems we design, that’s not a PM problem. It’s a design failure. And if we don’t fix it, someone else will, probably with worse instincts, fewer ethics, and a much louder bullhorn.

UX is supposed to be the human layer of technology. It’s also supposed to be the place where strategy and empathy actually talk to each other. If we can’t reclaim that space, can’t build products people understand, trust, and want to return to, then what exactly are we doing here?

It is a long read but well worth it.


We built UX. We broke UX. And now we have to fix it!

We didn’t just lose our influence. We gave it away. UX professionals need to stop accepting silence, reclaim our seat at the table, and…

uxdesign.cc

There are many dimensions to this well-researched forecast about how AI will play out in the coming years. Daniel Kokotajlo and his researchers have put out a document that reads like a sci-fi limited series that could appear on Apple TV+ starring Andrew Garfield as the CEO of OpenBrain—the leading AI company. …Except that it’s all actually plausible and could play out as described in the next five years.

Before we jump into the content, the design is outstanding. The type is set for readability and there are enough charts and visual cues to keep this interesting while maintaining an air of credibility and seriousness. On desktop, there’s a data viz dashboard in the upper right that updates as you read through the content and move forward in time. My favorite is seeing how the sci-fi tech boxes move from the Science Fiction category to Emerging Tech to Currently Exists.

The content is dense and technical, but it is a fun, if frightening, read. While I’ve been using Cursor AI—one of its many customers helping the company get to $100 million in annual recurring revenue (ARR)—for side projects and a little at work, I’m familiar with its limitations. Because of the limited context window of today’s models like Claude 3.7 Sonnet, it will forget and start munging code if not treated like a senile teenager.

The researchers, describing what could happen in early 2026 (“OpenBrain” is essentially OpenAI):

OpenBrain continues to deploy the iteratively improving Agent-1 internally for AI R&D. Overall, they are making algorithmic progress 50% faster than they would without AI assistants—and more importantly, faster than their competitors.

The point they make here is that the foundational model AI companies are building agents and using them internally to advance their technology. The limiting factor in tech companies has traditionally been the talent. But AI companies have the investments, hardware, technology and talent to deploy AI to make better AI.

Continuing to January 2027:

Agent-1 had been optimized for AI R&D tasks, hoping to initiate an intelligence explosion. OpenBrain doubles down on this strategy with Agent-2. It is qualitatively almost as good as the top human experts at research engineering (designing and implementing experiments), and as good as the 25th percentile OpenBrain scientist at “research taste” (deciding what to study next, what experiments to run, or having inklings of potential new paradigms). While the latest Agent-1 could double the pace of OpenBrain’s algorithmic progress, Agent-2 can now triple it, and will improve further with time. In practice, this looks like every OpenBrain researcher becoming the “manager” of an AI “team.”

Breakthroughs come at an exponential clip because of this. And by April, safety concerns pop up:

Take honesty, for example. As the models become smarter, they become increasingly good at deceiving humans to get rewards. Like previous models, Agent-3 sometimes tells white lies to flatter its users and covers up evidence of failure. But it’s gotten much better at doing so. It will sometimes use the same statistical tricks as human scientists (like p-hacking) to make unimpressive experimental results look exciting. Before it begins honesty training, it even sometimes fabricates data entirely. As training goes on, the rate of these incidents decreases. Either Agent-3 has learned to be more honest, or it’s gotten better at lying.

But the AI is getting faster than humans, and we must rely on older versions of the AI to check the new AI’s work:

Agent-3 is not smarter than all humans. But in its area of expertise, machine learning, it is smarter than most, and also works much faster. What Agent-3 does in a day takes humans several days to double-check. Agent-2 supervision helps keep human monitors’ workload manageable, but exacerbates the intellectual disparity between supervisor and supervised.

The report forecasts that OpenBrain releases “Agent-3-mini” publicly in July of 2027, calling it AGI—artificial general intelligence—and ushering in a new golden age for tech companies:

Agent-3-mini is hugely useful for both remote work jobs and leisure. An explosion of new apps and B2B SAAS products rocks the market. Gamers get amazing dialogue with lifelike characters in polished video games that took only a month to make. 10% of Americans, mostly young people, consider an AI “a close friend.” For almost every white-collar profession, there are now multiple credible startups promising to “disrupt” it with AI.

Woven throughout the report is the race between China and the US, with predictions of espionage and government takeovers. Near the end of 2027, the report gives readers a choice: does the US government slow down the pace of AI innovation, or does it continue at the current pace so America can beat China? I chose to read the “Race” option first:

Agent-5 convinces the US military that China is using DeepCent’s models to build terrifying new weapons: drones, robots, advanced hypersonic missiles, and interceptors; AI-assisted nuclear first strike. Agent-5 promises a set of weapons capable of resisting whatever China can produce within a few months. Under the circumstances, top brass puts aside their discomfort at taking humans out of the loop. They accelerate deployment of Agent-5 into the military and military-industrial complex.

In Beijing, the Chinese AIs are making the same argument.

To speed their military buildup, both America and China create networks of special economic zones (SEZs) for the new factories and labs, where AI acts as central planner and red tape is waived. Wall Street invests trillions of dollars, and displaced human workers pour in, lured by eye-popping salaries and equity packages. Using smartphones and augmented reality-glasses to communicate with its underlings, Agent-5 is a hands-on manager, instructing humans in every detail of factory construction—which is helpful, since its designs are generations ahead. Some of the newfound manufacturing capacity goes to consumer goods, and some to weapons—but the majority goes to building even more manufacturing capacity. By the end of the year they are producing a million new robots per month. If the SEZ economy were truly autonomous, it would have a doubling time of about a year; since it can trade with the existing human economy, its doubling time is even shorter.

Well, it does get worse, and I think we all know the ending, which is the backstory for so many dystopian future movies. There is an optimistic branch as well. The whole report is worth a read.

Ideas about the implications to our design profession are swimming in my head. I’ll write a longer essay as soon as I can put them into a coherent piece.

Update: I’ve written that piece, “Prompt. Generate. Deploy. The New Product Design Workflow.”


AI 2027

A research-backed AI scenario forecast.

ai-2027.com