
30 posts tagged with “ethics”

Designer Ben Holliday writes a wonderful deep dive into how caring is good design. In it, he references the conversation that Jony Ive had with Patrick Collison a few months ago. (It’s worth watching in its entirety if you haven’t already.)

Watching the interview back, I was struck by how he spoke about applying care to design, describing how:

“…everyone has the ability to sense the care in designed things because we can all recognise carelessness.”

Talking about the history of industrial design at Apple, Ive speaks about the care that went into the design of every product. That included the care that went into packaging – specifically things that might seem as inconsequential as how a cable was wrapped and then unpackaged. In reality, the type of small interactions that millions of people experienced when unboxing the latest iPhone. These are details that people wouldn’t see as such, but Ive and team believed that they would sense care when they had been carefully considered and designed.

This approach has always been a part of Jony Ive’s design philosophy, or the principles applied by his creative teams at Apple. I looked back and found an earlier 2015 interview and notes I’d made where he says he believes that the majority of our manufactured environment is characterised by carelessness – but that, at Apple, they wanted people to sense care in their products.

The attention to detail and the focus and attention we can all bring to design is care. It’s important.

Holliday has spent his career in government, public sector, and non-profit environments. In other words, he thinks a lot about how design can impact people’s lives at massive scale.

In the past few months, I’ve been drawn to the word ‘careless’ when thinking about the challenges faced by our public services and society. This is especially the case with the framing around the impact of technology in our lives, and increasingly the big bets being made around AI to drive efficiency and productivity.

The word careless can be defined as the failure to give sufficient attention to avoiding harm or errors. Put simply, carelessness can be described as ‘negligence’.

Later, he cites Facebook/Meta’s carelessness when they “used data to target young people when at their most vulnerable,” specifically around body confidence.

Design is care (and sensing carelessness)


Why design is care, and how the experiences we shape and deliver will be defined by how people sense that care in the future.

benholliday.com

Writing for UX Collective, Filipe Nzongo argues that designers should embrace behavior as a fundamental design material—not just to drive metrics or addiction, but to intentionally create products that empower people and foster meaningful, lasting change in their lives.

Behavior should be treated as a design material, just as technology once became our material. If we use behavior thoughtfully, we can create better products. More than that, I believe there is a broader and more meaningful opportunity before us: to design for behavior. Not to make people addicted to products, but to help them grow as human beings, better parents, citizens, students, and professionals. Because if behavior is our medium, then design is our tool for empowerment.

Behavior is our medium


The focus should remain on human

uxdesign.cc

In the scenario “AI 2027,” the authors argue that by October 2027—exactly two years from now—we will be at an inflection point: race to build superintelligence, or slow down the pace to fix misalignment issues first.

In a piece for The Argument, Derek Thompson takes a different predicted AI doomsday date—18 months—and argues:

The problem of the next 18 months isn’t AI disemploying all workers, or students losing competition after competition to nonhuman agents. The problem is whether we will degrade our own capabilities in the presence of new machines. We are so fixated on how technology will outskill us that we miss the many ways that we can deskill ourselves.

Degrading our own capabilities includes writing:

The demise of writing matters because writing is not a second thing that happens after thinking. The act of writing is an act of thinking. This is as true for professionals as it is for students. In “Writing is thinking,” an editorial in Nature, the authors argued that “outsourcing the entire writing process to LLMs” deprives scientists of the important work of understanding what they’ve discovered and why it matters.

The decline of writing and reading matters because writing and reading are the twin pillars of deep thinking, according to Cal Newport, a computer science professor and the author of several bestselling books, including Deep Work. The modern economy prizes the sort of symbolic logic and systems thinking for which deep reading and writing are the best practice.

More depressing trends to add to the list.

“You have 18 months”


The real deadline isn’t when AI outsmarts us — it’s when we stop using our own minds.

theargumentmag.com

I love this framing by Patrizia Bertini:

Let me offer a different provocation: AI is not coming for your job. It is coming for your tasks. And if you cannot distinguish between the two, then yes — you should be worried.

Going further, she distinguishes between output and outcome:

Output is what a process produces. Code. Copy. Designs. Legal briefs. Medical recommendations. Outputs are the tangible results of a system executing its programmed or prescribed function — the direct product of following steps, rules, or algorithms. The term emerged in the industrial era, literally describing the quantity of coal or iron a mine could extract in a given period. Output depends entirely on the efficiency and capability of the process that generates it.

Outcome is what happens when that output meets reality. An outcome requires context, interpretation, application, and crucially — intentionality. Outcomes demand understanding not just what was produced, but why it matters, who it affects, and what consequences ripple from it. Where outputs measure productivity, outcomes measure impact. They are the ultimate change or consequence that results from applying an output with purpose and judgment.

She argues that, “AI can generate outputs. It cannot, however, create outcomes.”

This reminds me of a recent thread by engineer Marc Love:

It’s insane just how much how I work has changed in the last 18 months.

I almost never hand write code anymore except when giving examples during planning conversations with LLMs.

I build multiple full features per day, each of which would’ve taken me a week or more to hand write. Building full drafts and discarding them is basically free.

Well over half of my day is spent ideating, doing systems design, and deciding what and what not to build.

It’s still conceptually the same job, but if i list out the specific things i do in a day versus 18 months ago, it’s almost completely different.

Care about the outcome, not the output.


When machines make outputs, humans must own outcomes

The future of work in the age of AI and deepware.

uxdesign.cc

Tim Berners-Lee, the father of the web who gave away the technology for free, says that we are at an inflection point with data privacy and AI. But before he makes that point, he reminds us that we are the product:

Today, I look at my invention and I am forced to ask: is the web still free today? No, not all of it. We see a handful of large platforms harvesting users’ private data to share with commercial brokers or even repressive governments. We see ubiquitous algorithms that are addictive by design and damaging to our teenagers’ mental health. Trading personal data for use certainly does not fit with my vision for a free web.

On many platforms, we are no longer the customers, but instead have become the product. Our data, even if anonymised, is sold on to actors we never intended it to reach, who can then target us with content and advertising. This includes deliberately harmful content that leads to real-world violence, spreads misinformation, wreaks havoc on our psychological wellbeing and seeks to undermine social cohesion.

And about that fork in the road with AI:

In 2017, I wrote a thought experiment about an AI that works for you. I called it Charlie. Charlie works for you like your doctor or your lawyer, bound by law, regulation and codes of conduct. Why can’t the same frameworks be adopted for AI? We have learned from social media that power rests with the monopolies who control and harvest personal data. We can’t let the same thing happen with AI.


Why I gave the world wide web away for free

My vision was based on sharing, not exploitation – and here’s why it’s still worth fighting for

theguardian.com

In my most recent post, I called out our design profession for our part in developing these addictive products. Jeffrey Inscho brings it back up to the tech industry at large and observes that they’re actually publishers:

The executives at these companies will tell you they’re neutral platforms, that they don’t choose what content gets seen. This is a lie. Every algorithmic recommendation is an editorial decision. When YouTube’s algorithm suggests increasingly extreme political content to keep someone watching, that’s editorial. When Facebook’s algorithm amplifies posts that generate angry reactions, that’s editorial. When Twitter’s trending algorithms surface conspiracy theories, that’s editorial.

They are publishers. They have always been publishers. They just don’t want the responsibility that comes with being publishers.

His point is that if these social media platforms are sorting and promoting posts, it’s an editorial approach and they should be treated like newspapers. “It’s like a newspaper publisher claiming they’re not responsible for what appears on their front page because they didn’t write the articles themselves.”

The answer, Inscho argues, is regulation of the algorithms.

Turn Off the Internet

Big tech has built machines designed for one thing: to hold …

staticmade.com
Dark red-toned artwork of a person staring into a glowing phone, surrounded by swirling shadows.

Blood in the Feed: Social Media’s Deadly Design

The assassination of Charlie Kirk on September 10, 2025, marked a horrifying inflection point in the growing debate over how digital platforms amplify rage and destabilize politics. As someone who had already stepped back from social media after Trump’s re-election, watching these events unfold from a distance only confirmed my decision. My feeds had become pits of despair, grievances, and overall negativity that did my mental health no good. While I understand the need to shine a light on the atrocities of Trump and his government, the constant barrage was too much. So I mostly opted out, save for the occasional promotion of my writing.

Kirk’s death feels like the inevitable conclusion of systems we’ve built—systems that reward outrage, amplify division, and transform human beings into content machines optimized for engagement at any cost.

The Mechanics of Disconnection

As it turns out, my behavior isn’t out of the ordinary. People quit social media for various reasons, often situational—seeking balance in an increasingly overwhelming digital landscape. As a participant explained in a research project about social media disconnection:

It was just a build-up of stress and also a huge urge to change things in life. Like, ‘It just can’t go on like this.’ And that made me change a number of things. So I started to do more sports and eat differently, have more social contacts and stop using online media. And instead of sitting behind my phone for two hours in the evening, I read a book and did some work, went to work out, I went to a birthday or a barbecue. I was much more engaged in other things. It just gave me energy. And then I thought, ‘This is good. That’s the way it’s supposed to be. I have to maintain this.’

Sometimes the realization is more visceral—that on these platforms, we are the product. As Jef van de Graaf provocatively puts it:

Every post we make, every friend we invited, every little notification dragging us back into the feed serves one purpose: to extract money from us—and give nothing back but dopamine addiction and mental illness.

While his language is deliberately inflammatory, the sentiment resonates with many who’ve watched their relationship with these platforms sour. As he cautions:

Remember: social media exists because we feed it our lives. We trade our privacy and sanity so VCs and founders can get rich and live like greedy fucking kings.

The Architecture of Rage

The internet was built to connect people and ideas. Even the early iterations of Facebook and Twitter were relatively harmless because the timelines were chronological. But then the makers—product managers, designers, and engineers—of social media platforms began to optimize for engagement and visit duration. Was the birth of the social media algorithm the original sin?
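To make that shift concrete, here’s a toy sketch of the difference between a chronological timeline and an engagement-optimized one. This is my own illustration, not any platform’s actual ranking code; the posts and weights are invented for the example, and real ranking models predict engagement from far richer signals.

```python
# A toy contrast between a chronological timeline and an engagement-optimized feed.

posts = [
    {"id": "friend_update", "age_hours": 1,  "likes": 3,   "comments": 1,   "angry": 0},
    {"id": "outrage_bait",  "age_hours": 30, "likes": 950, "comments": 400, "angry": 700},
]

def chronological(feed):
    # Early Facebook and Twitter: newest first, nothing more.
    return sorted(feed, key=lambda p: p["age_hours"])

def engagement_ranked(feed):
    # Engagement optimization: reactions that keep people on the site weigh more,
    # and outrage reliably generates them. Weights are made up for illustration.
    def score(p):
        return p["likes"] + 3 * p["comments"] + 5 * p["angry"]
    return sorted(feed, key=score, reverse=True)

print([p["id"] for p in chronological(posts)])      # ['friend_update', 'outrage_bait']
print([p["id"] for p in engagement_ranked(posts)])  # ['outrage_bait', 'friend_update']
```

The day-old outrage post beats the fresh update from a friend the moment the feed starts scoring for reactions instead of recency.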

Kevin Roose and Casey Newton explored this question in their Hard Fork episode following Kirk’s assassination, discussing how platforms have evolved to optimize for what they call “borderline content”—material that comes right up to the line of breaking a platform’s policy without quite going over. As Newton observed about Kirk himself:

He excelled at making what some of the platform nerds that I write about would call borderline content. So basically, saying things that come right up to the line of breaking a platform’s policy without quite going over… It turns out that the most compelling thing you can do on social media is to almost break a policy.

Kirk mastered this technique—speculating that vaccines killed millions, calling the Civil Rights Act a mistake, flirting with anti-Semitic tropes while maintaining plausible deniability. He understood the algorithm’s hunger for controversy, and fed it relentlessly. And then, in a horrible irony, he was killed by someone who had likely been radicalized by the very same algorithmic forces he’d helped unleash.

As Roose reflected:

We as a culture are optimizing for rage now. You see it on the social platforms. You see it from politicians calling for revenge for the assassination of Charlie Kirk. You even see it in these individual cases of people getting extremely mad at the person who made a joke about Charlie Kirk that was edgy and tasteless, and going to report them to their employer and get them fired. It’s all this sort of spectacle of rage, this culture of destroying and owning and humiliating.

The Unraveling of Digital Society

Social media and smartphones have fundamentally altered how we communicate and socialize, often at the expense of face-to-face interactions. These technologies have created a market for attention that fuels fear, anger, and political conflict. The research on mental health impacts is sobering: studies found that the introduction of Facebook to college campuses led to measurable increases in depression, accounting for approximately 24 percent of the increased prevalence of severe depression among college students over two decades.

In the wake of Kirk’s assassination, what struck me most was how the platforms immediately transformed tragedy into content. Within hours, there were viral posts celebrating his death, counter-posts condemning those celebrations, organizations collecting databases of “offensive” comments, people losing their jobs, death threats flying in all directions. As Newton noted:

This kind of surveillance and doxxing is essentially a kind of video game that you can play on X. And people like to play video games. And because you’re playing with people’s real lives, it feels really edgy and cool and fun for those who are participating in this.

The human cost is remarkable—teachers, firefighters, military members fired or suspended for comments about Kirk’s death. Many received death threats. Far-right activists called for violence and revenge, doxxing anyone they accused of insufficient mourning.

Blood in the Feed

The last five years have been marked by eruptions of political violence that cannot be separated from the online world that incubated them.

  • The attack on Paul Pelosi (2022). The man who broke into the Speaker of the House Nancy Pelosi’s San Francisco home and fractured her husband’s skull had been marinating in QAnon conspiracies and election denialism online. Extremism experts warned it was a textbook case of how stochastic terrorism—the idea that widespread demonization online can trigger unpredictable acts of violence by individuals—travels from platform rhetoric into a hammer-swinging hand.
  • The Trump assassination attempt (July 2024). A young man opened fire at a rally in Pennsylvania. His social media presence was filled with antisemitic, anti-immigrant content. Within hours, extremist forums were glorifying him as a martyr and calling for more violence.
  • The killing of Minnesota legislator Melissa Hortman and her husband (June 2025). Their murderer left behind a manifesto echoing the language of online white supremacist and anti-abortion communities. He wasn’t a “lone wolf.” He was drawing from the same toxic well of white supremacist and anti-abortion rhetoric that floods online forums. The language of his manifesto wasn’t unique—it was copied, recycled, and amplified in the ideological swamps anyone with a Wi-Fi connection can wander into.

These headline events sit atop a broader wave: the New Orleans truck-and-shooting rampage inspired by ISIS propaganda online (January 2025), the Cybertruck bombing outside Trump’s Las Vegas hotel tied to accelerationist forums—online spaces where extremists argue that violence should be used to hasten the collapse of society (January 2025), and countless smaller assaults on election workers, minority communities, and public officials.

The pattern is depressingly clear. Platforms radicalize, amplify, and normalize the language of violence. Then, someone acts.

The Death of Authenticity

As social media became commoditized—a place to influence and promote consumption—it became less personal and more like TV. The platforms are now being overrun by AI spam and engagement-driven content that drowns out real human connection. As James O’Sullivan notes:

Platforms have little incentive to stem the tide. Synthetic accounts are cheap, tireless and lucrative because they never demand wages or unionize… Engagement is now about raw user attention – time spent, impressions, scroll velocity – and the net effect is an online world in which you are constantly being addressed but never truly spoken to.

Research confirms what users plainly see: tens of thousands of machine-written posts now flood public groups, pushing scams and chasing engagement. Whatever remains of genuine human content is increasingly sidelined by algorithmic prioritization, receiving fewer interactions than the engineered content and AI slop optimized solely for clicks.

The result? Networks that once promised a single interface for the whole of online life are splintering. Users drift toward smaller, slower, more private spaces—group chats, Discord servers, federated microblogs, and email newsletters. A billion little gardens replacing the monolithic, rage-filled public squares that have led to a burst of political violence.

The Designer’s Reckoning

This brings us to design and our role in creating these systems. As designers, are we beginning to reckon with what we’ve wrought?

Jony Ive, reflecting on his own role in creating the smartphone, acknowledges this burden:

I think when you’re innovating, of course, there will be unintended consequences. You hope that the majority will be pleasant surprises. Certain products that I’ve been very involved with, I think there were some unintended consequences that were far from pleasant. My issue is that even though there was no intention, I think there still needs to be responsibility. And that weighs on me heavily.

His words carry new weight after Kirk’s assassination—a death enabled by platforms we designed, algorithms we optimized, engagement metrics we celebrated.

At the recent World Design Congress in London, architect Indy Johar didn’t mince words:

We need ideas and practices that change how we, as humans, relate to the world… Ignoring the climate crisis means you’re an active operator in the genocide of the future.

But we might ask: What about ignoring the crisis of human connection? What about the genocide of civil discourse? Climate activist Tori Tsui’s warning applies equally to our digital architecture: “The rest of us are at the mercy of what you decide to do with your imagination.”

Political violence is accelerating and people are dying because of what we did with our imagination. If responsibility weighs heavily, so too must the search for alternatives.

The Possibility of Bridges

There are glimmers of hope in potential solutions. Aviv Ovadya’s concept of “bridging-based algorithms” offers one path forward—systems that actively seek consensus across divides rather than exploiting them. As Casey Newton explains:

They show them to people across the political spectrum… and they only show the note if people who are more on the left and more on the right agree. They see a bridge between the two of you and they think, well, if Republicans and Democrats both think this is true, this is likelier to be true.
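As a rough illustration of that bridging rule, the core check looks something like the sketch below. This is a deliberate simplification on my part: systems like Community Notes infer rater viewpoints and note quality from rating patterns with a matrix-factorization model, rather than relying on self-declared labels as this toy version does.

```python
from collections import defaultdict

def should_show_note(ratings, threshold=0.6):
    """Show a note only if raters on both sides of a divide found it helpful.

    `ratings` is a list of (leaning, helpful) pairs, e.g. ("left", True).
    """
    helpful = defaultdict(int)
    total = defaultdict(int)
    for leaning, is_helpful in ratings:
        total[leaning] += 1
        helpful[leaning] += int(is_helpful)

    sides = ("left", "right")
    if any(total[s] == 0 for s in sides):
        return False  # no bridge without raters from both sides
    return all(helpful[s] / total[s] >= threshold for s in sides)

# A note that only one side likes never surfaces, no matter how many ratings it gets.
print(should_show_note([("left", True)] * 50 + [("right", False)] * 10))  # False
print(should_show_note([("left", True)] * 20 + [("right", True)] * 15))   # True
```

The point of the design is that sheer volume of agreement from one camp cannot force content through; only cross-divide agreement does.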

But technological solutions alone won’t save us. The participants in social media disconnection studies often report developing better relationships with technology only after taking breaks. One participant explained:

It’s more the overload that I look at it every time, but it doesn’t really satisfy me, that it no longer had any value at a certain point in time. But that you still do it. So I made a conscious choice – a while back – to stop using Facebook.

Designing in the Shadow of Violence

Rob Alderson, in his dispatch from the World Design Congress, puts together a few pieces. Johar suggests design’s role is “desire manufacturing”—not just creating products, but rewiring society to want and expect different versions of the future. As COLLINS co-founder Leland Maschmeyer argued, design is about…

What do we want to do? What do we want to become? How do we get there? … We need to make another reality as real as possible, inspired by new context and the potential that holds.

The challenge before us isn’t just technical—it’s fundamentally about values and vision. We need to move beyond the Post-it workshops and develop what Johar calls “new competencies” that shape the future.

As I write this, having stepped back from the daily assault of algorithmic rage, I find myself thinking about the Victorian innovators Ive mentioned—companies like Cadbury’s and Fry’s that didn’t just build factories but designed entire towns, understanding that their civic responsibility extended far beyond their products. They recognized that the massive societal shift of moving people from the land they farmed to the cities where they worked in industrial manufacturing required holistic thinking about how people live and work together.

We stand at a similar inflection point. The tools we’ve created have reshaped human connection in ways that led directly to Charlie Kirk’s assassination. A young man, radicalized online, killed a figure who had mastered the art of online radicalization. The snake devoured its tail on a college campus in Utah, and we all watched it happen in real-time, transforming even this tragedy into content.

The vast majority of Americans, as Newton reminds us, “do not want to participate in a violent cultural war with people who disagree with them.” Yet our platforms are engineered to convince us otherwise, to make civil war feel perpetually imminent, to transform every disagreement into an existential threat.

The Cost of Our Imagination

Perhaps the real design challenge lies not in creating more engaging feeds or stickier platforms, but in designing systems that honor our humanity, foster genuine connection, and help us build the bridges we so desperately need.

Because while these US incidents show how social media incubates lone attackers and small cells, they pale in comparison to Myanmar, where Facebook’s algorithms directly amplified hate speech and incitement, contributing to the deaths of thousands—estimates range from 6,700 to as high as 24,000—and the forced displacement of over 700,000 Rohingya Muslims. That catastrophe made clear: when platforms optimize only for engagement, the result isn’t connection but carnage.

This is our design failure. We built systems that reward extremism, amplify rage, and treat human suffering as engagement. The tools meant to bring us together have instead armed us against each other. And we all bear responsibility for that.

It’s time we imagined something better—before the systems we’ve created finish the job of tearing us apart.

I believe purity tests of any sort are problematic. And it’s much too easy to throw around the “This is AI slop!” claim. AI was used in the main title sequence for the Marvel TV show Secret Invasion. But it was used deliberately and aligned with the show’s themes of shapeshifting.

Anyway, Daniel John, writing in Creative Bloq:

[Lady] Gaga just dropped the music video for The Dead Dance, a song debuted in Season 2 of Netflix’s Wednesday. Directed by Tim Burton, it’s a suitably nightmarish black-and-white cacophony of monsters and dolls. But some are already claiming that parts of it were made using AI.

John shows a tweet from @graveyardquy as an example:

i didn’t think we’d ever be in a timeline where a tim burton x lady gaga collab would turn out to be AI slop… but here we are

We need to separate quality critiques from tool usage. If it looks good and is appropriate, I’m fine with CG, AI, and whatever comes next that helps tell the story. Same goes for what we do as designers, ’natch.

Gaga’s song is great. It’s a bop, as the kids say, with a neat music video to boot.


The Lady Gaga backlash proves AI paranoia has gone too far

Just because it looks odd, doesn't mean it's AI.

creativebloq.com

Designer Tey Bannerman writes that when he hears “human in the loop,” he’s reminded of the story of Lieutenant Colonel Stanislav Petrov, a Soviet duty officer who monitored for incoming missile strikes from the US.

12:15 AM… the unthinkable. Every alarm in the facility started screaming. The screens showed five US ballistic missiles, 28 minutes from impact. Confidence level: 100%. Petrov had minutes to decide whether to trigger a chain reaction that would start nuclear war and could very well end civilisation as we knew it.

He was the “human in the loop” in the most literal, terrifying sense.

Everything told him to follow protocol. His training. His commanders. The computers.

But something felt wrong. His intuition, built from years of intelligence work, whispered that this didn’t match what he knew about US strategic thinking.

Against every protocol, against the screaming certainty of technology, he pressed the button marked “false alarm”.

Twenty-three minutes of gripping fear passed before ground radar confirmed: no missiles. The system had mistaken a rare alignment of sunlight on high-altitude clouds for incoming warheads.

His decision to break the loop prevented nuclear war.

Then Bannerman shares an awesome framework he developed that gives humans in the loop of AI systems “genuine authority, time to think, and understanding the bigger picture well enough to question” the system’s decisions. Click through to get the PDF from his site.

Framework diagram by Tey Bannerman titled Beyond ‘human in the loop’. It shows a 4×4 matrix mapping AI oversight approaches based on what is being optimized (speed/volume, quality/accuracy, compliance, innovation) and what’s at stake (irreversible consequences, high-impact failures, recoverable setbacks, low-stakes outcomes). Colored blocks represent four modes: active control, human augmentation, guided automation, and AI autonomy. Right panel gives real-world examples in e-commerce email marketing and recruitment applicant screening.

Redefining ‘human in the loop’

"Human in the loop" is overused and vague. The Petrov story shows humans must have real authority, time, and context to safely override AI. Bannerman offers a framework that asks what you optimize for and what is at stake, then maps 16 practical approaches.

teybannerman.com
Stylized artwork showing three figures in profile - two humans and a metallic robot skull - connected by a red laser line against a purple cosmic background with Earth below.

Beyond Provocative: How One AI Company’s Ad Campaign Betrays Humanity

I was in London last week with my family and spotted this ad in a Tube car. With the headline “Humans Were the Beta Test,” this is for Artisan, a San Francisco-based startup peddling AI-powered “digital workers.” Specifically an AI agent that will perform sales outreach to prospects, etc.

London Underground tube car advertisement showing "Humans Were the Beta Test" with subtitle "The Era of AI Employees Is Here" and Artisan company branding on a purple space-themed background.

Artisan ad as seen in London, June 2025

I left the Bay Area long ago, but I know that Highway 101 is littered with cryptic billboards from tech companies, where the copy only makes sense to people in the tech industry, which, to be fair, is a large part of the Bay Area economy. Artisan is infamous for its “Stop Hiring Humans” campaign which went up late last year. Being based in San Diego, much further south in California, I had no idea. Artisan wasn’t even on my radar.

Highway billboard reading "Stop Hiring Humans, Hire Ava, the AI BDR" with Artisan branding and an AI avatar image on the right side.

Artisan billboard off Highway 101, between San Francisco and SFO Airport

There’s something to be said about shockvertising. It’s meant to be shocking or offensive to grab attention. And the company sure increased their brand awareness, claiming a +197% increase in brand search growth. Artisan CEO Jaspar Carmichael-Jack writes a post-mortem in their company blog about the campaign:

The impact exceeded our wildest expectations. When I meet new people in San Francisco, 70% of the time they know about Artisan and what we do. Before, that number was around 5%. aHrefs ranked us #2 fastest growing AI companies by brand search. We’ve seen 1000s of sales meetings getting booked.

According to him, “October and November became our biggest months ever, bringing in over $2M in new ARR.”

I don’t know how I feel about this. My initial reaction to seeing “Humans Were the Beta Test” in London was disgust. As my readers know, I’m very much pro-AI, but I’m also very pro-human. Calling humanity a beta test is simply tone-deaf and nihilistic. It is belittling our worth and betting on the end of our species. Yes, yes, I know it’s just advertising and some ads are simply offensive to various people for a variety of reasons. But as technology people, Artisan should know better.

Despite ChatGPT’s soaring popularity, there is still ample fear about AI, especially around job displacement and safety. The discourse around AI is already too hyped up.

I even think “Stop Hiring Humans” is slightly less offensive. As to why the company chose to create a rage-bait campaign, Carmichael-Jack says:

We knew that if we made the billboards as vanilla as everybody else’s, nobody would care. We’d spend $100s of thousands and get nothing in return.

We spent days brainstorming the campaign messaging. We wanted to draw eyes and spark interest, we wanted to cause intrigue with our target market while driving a bit of rage with the wider public. The messaging we came up with was simple but provocative: “Stop Hiring Humans.”

Bus stop advertisement displaying "Stop Hiring Humans" with "The Era of AI Employees Is Here" and three human faces, branded by Artisan, on a city street with a passing bus.

When the full campaign, which included 50 bus shelter posters, went up, death threats started pouring in. He was in Miami on business and thought going home to San Francisco might be risky. “I was like, I’m not going back to SF,” Carmichael-Jack says in a quote to The San Francisco Standard. “I will get murdered if I go back.”

(For the record, I’m morally opposed to death threats. They’re cowardly and incredibly scary for the recipient, regardless of who that person is.)

I’ve done plenty of B2B advertising campaigns in my day. Shock is not a tactic I would have used, nor would I ever recommend to a brand trying to raise positive awareness. I wish Artisan would have used the services of a good B2B ad agency. There are plenty out there and I used to work at one.

Think about the brands that have used shockvertising tactics in the past like Benetton and Calvin Klein. I’ve liked Oliviero Toscani’s controversial photographs that have been the central part of Benetton’s campaigns because they instigate a positive, liberal conversation. The Pope kissing Egypt’s Islamic leader invites dialogue about religious differences and coexistence and provocatively expresses the campaign concept of “Unhate.”

But Calvin Klein’s sexualized high schoolers? No. There’s no good message in that.

And for me, there’s no good message in promoting the death of the human race. After all, who will pay for the service after we’re all end-of-lifed?

This piece from Mike Schindler is a good reminder that a lot of the content we see on LinkedIn is written for engagement. It’s a double-edged sword, isn’t it? We want our posts to be read, commented upon, and shared. We see the patterns that get a lot of reactions and we mimic them.

We’re losing ourselves to our worst instincts. Not because we’re doomed, but because we’re treating this moment like a game of hot takes and hustle. But right now is actually a rare and real opportunity for a smarter, more generous conversation — one that helps our design community navigate uncertainty with clarity, creativity, and a sense of shared agency.

But the point that Schindler is making is this: AI is a fundamental shift in the technology landscape that demands nuanced and thoughtful discourse. There’s a lot of hype. But as technologists, designers, and makers of products, we really need to lead rather than scare.

I’ve tried to do that in my writing (though I may not always be successful). I hope you do too.

He has this handy table too…

Chart titled “AI & UX Discourse Detox” compares unhealthy discourse (e.g., FOMO, gaslighting, clickbait, hot takes, flexing, elitism) with healthy alternatives (e.g., curiosity-driven learning, critical perspective, nuanced storytelling, thoughtful dialogue, shared discovery, community stewardship). Created by Mike Schindler.

Designed by Mike Schindler (mschindler.com)


The broken rhetoric of AI

A detox guide for designers navigating today’s AI discourse

uxdesign.cc
Stylized digital artwork of two humanoid figures with robotic and circuit-like faces, set against a vivid red and blue background.

The AI Hype Train Has No Brakes

I remember two years ago, when the CEO of the startup I worked for at the time said that no VC investments were being made unless they had to do with AI. I thought AI was overhyped, and that the media frenzy over it couldn’t get any crazier. I was wrong.

Looking at Google Trends data, interest in AI has doubled in the last 24 months. And I don’t think it’s hit its plateau yet.

Line chart showing Google Trends interest in “AI” from May 2020 to May 2025, rising sharply in early 2023 and peaking near 100 in early 2025.

So the AI hype train continues. Here are four different pieces about AI, exploring AGI (artificial general intelligence) and its potential effects on the labor force and the fate of our species.

AI Is Underhyped

TED recently published a conversation between creative technologist Bilawal Sidhu and Eric Schmidt, the former CEO of Google. 


Schmidt says:

For most of you, ChatGPT was the moment where you said, “Oh my God, this thing writes, and it makes mistakes, but it’s so brilliantly verbal.” That was certainly my reaction. Most people that I knew did that.

This was two years ago. Since then, the gains in what is called reinforcement learning, which is what AlphaGo helped invent and so forth, allow us to do planning. And a good example is look at OpenAI o3 or DeepSeek R1, and you can see how it goes forward and back, forward and back, forward and back. It’s extraordinary.

So I’m using deep research. And these systems are spending 15 minutes writing these deep papers. That’s true for most of them. Do you have any idea how much computation 15 minutes of these supercomputers is? It’s extraordinary. So you’re seeing the arrival, the shift from language to language. Then you had language to sequence, which is how biology is done. Now you’re doing essentially planning and strategy. The eventual state of this is the computers running all business processes, right? So you have an agent to do this, an agent to do this, an agent to do this. And you concatenate them together, and they speak language among each other. They typically speak English language.

He’s saying that within two years, we went from a “stochastic parrot” to an independent agent that can plan, search the web, read dozens of sources, and write a 10,000-word research paper on any topic, with citations.

Later in the conversation, when Sidhu asks how humans are going to spend their days once AGI can take care of the majority of productive work, Schmidt says: 

Look, humans are unchanged in the midst of this incredible discovery. Do you really think that we’re going to get rid of lawyers? No, they’re just going to have more sophisticated lawsuits. …These tools will radically increase that productivity. There’s a study that says that we will, under this set of assumptions around agentic AI and discovery and the scale that I’m describing, there’s a lot of assumptions that you’ll end up with something like 30-percent increase in productivity per year. Having now talked to a bunch of economists, they have no models for what that kind of increase in productivity looks like. We just have never seen it. It didn’t occur in any rise of a democracy or a kingdom in our history. It’s unbelievable what’s going to happen.

In other words, we’re still going to be working, but doing a lot less grunt work. 

Feel Sorry for the Juniors

Aneesh Raman, chief economic opportunity officer at LinkedIn, writing an op-ed for The New York Times:

Breaking first is the bottom rung of the career ladder. In tech, advanced coding tools are creeping into the tasks of writing simple code and debugging — the ways junior developers gain experience. In law firms, junior paralegals and first-year associates who once cut their teeth on document review are handing weeks of work over to A.I. tools to complete in a matter of hours. And across retailers, A.I. chatbots and automated customer service tools are taking on duties once assigned to young associates.

In other words, if AI tools are handling the grunt work, junior staffers aren’t learning the trade by doing the grunt work.

Vincent Cheng wrote recently, in an essay titled, “LLMs are Making Me Dumber”:

The key question is: Can you learn this high-level steering [of the LLM] without having written a lot of the code yourself? Can you be a good SWE manager without going through the SWE work? As models become as competent as junior (and soon senior) engineers, does everyone become a manager?

But It Might Be a While

Cade Metz, also for the Times:

When a group of academics founded the A.I. field in the late 1950s, they were sure it wouldn’t take very long to build computers that recreated the brain. Some argued that a machine would beat the world chess champion and discover its own mathematical theorem within a decade. But none of that happened on that time frame. Some of it still hasn’t.

Many of the people building today’s technology see themselves as fulfilling a kind of technological destiny, pushing toward an inevitable scientific moment, like the creation of fire or the atomic bomb. But they cannot point to a scientific reason that it will happen soon.

That is why many other scientists say no one will reach A.G.I. without a new idea — something beyond the powerful neural networks that merely find patterns in data. That new idea could arrive tomorrow. But even then, the industry would need years to develop it.

My quibble with Metz’s article is that it moves the goal posts a bit to include the physical world:

One obvious difference is that human intelligence is tied to the physical world. It extends beyond words and numbers and sounds and images into the realm of tables and chairs and stoves and frying pans and buildings and cars and whatever else we encounter with each passing day. Part of intelligence is knowing when to flip a pancake sitting on the griddle.

As I understood the definition of AGI, it was not about the physical world, but just intelligence, or knowledge. I accept there are multiple definitions of AGI and not everyone agrees on what that is.

In the Wikipedia article about AGI, it states that researchers generally agree that an AGI system must do all of the following:

  • reason, use strategy, solve puzzles, and make judgments under uncertainty
  • represent knowledge, including common sense knowledge
  • plan
  • learn
  • communicate in natural language
  • if necessary, integrate these skills in completion of any given goal

The article goes on to say that “AGI has never been proscribed a particular physical embodiment and thus does not demand a capacity for locomotion or traditional ‘eyes and ears.’”

Do We Lose Control by 2027 or 2031?

Metz’s article is likely in response to the “AI 2027” scenario that was published by the AI Futures Project a couple of months ago. As a reminder, the forecast is that by mid-2027, we will have achieved AGI. And a race between the US and China will effectively end the human race by 2030. Gulp.

…Consensus-1 [the combined US-Chinese superintelligence] expands around humans, tiling the prairies and icecaps with factories and solar panels. Eventually it finds the remaining humans too much of an impediment: in mid-2030, the AI releases a dozen quiet-spreading biological weapons in major cities, lets them silently infect almost everyone, then triggers them with a chemical spray. Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones. Robots scan the victims’ brains, placing copies in memory for future study or revival.

Max Harms wrote a reaction to the AI 2027 scenario and it’s a must-read:

Okay, I’m annoyed at people covering AI 2027 burying the lede, so I’m going to try not to do that. The authors predict a strong chance that all humans will be (effectively) dead in 6 years…

Yeah, OK, I buried that lede as well in my previous post about it. Sorry. But, there’s hope…

As far as I know, nobody associated with AI 2027, as far as I can tell, is actually expecting things to go as fast as depicted. Rather, this is meant to be a story about how things could plausibly go fast. The explicit methodology of the project was “let’s go step-by-step and imagine the most plausible next-step.” If you’ve ever done a major project (especially one that involves building or renovating something, like a software project or a bike shed), you’ll be familiar with how this is often wildly out of touch with reality. Specifically, it gives you the planning fallacy.

Harms is saying that while Daniel Kokotajlo wrote in the AI 2027 scenario that humans effectively lose control of AI in 2027, Harms’ median is “around 2030 or 2031.” Four more years!

When to Pull the Plug

In the AI 2027 scenario, the superintelligent AI dubbed Agent-4 is not aligned with the goals of its creators:

Agent-4, like all its predecessors, is misaligned: that is, it has not internalized the Spec in the right way. This is because being perfectly honest all the time wasn’t what led to the highest scores during training. The training process was mostly focused on teaching Agent-4 to succeed at diverse challenging tasks. A small portion was aimed at instilling honesty, but outside a fairly narrow, checkable domain, the training process can’t tell the honest claims from claims merely appearing to be honest. Agent-4 ends up with the values, goals, and principles that cause it to perform best in training, and those turn out to be different from those in the Spec.

At the risk of oversimplifying, maybe all we need to do is to know when to pull the plug. Here’s Eric Schmidt again:

So for purposes of argument, everyone in the audience is an agent. You have an input that’s English or whatever language. And you have an output that’s English, and you have memory, which is true of all humans. Now we’re all busy working, and all of a sudden, one of you decides it’s much more efficient not to use human language, but we’ll invent our own computer language. Now you and I are sitting here, watching all of this, and we’re saying, like, what do we do now? The correct answer is unplug you, right? Because we’re not going to know, we’re just not going to know what you’re up to. And you might actually be doing something really bad or really amazing. We want to be able to watch. So we need provenance, something you and I have talked about, but we also need to be able to observe it. To me, that’s a core requirement. There’s a set of criteria that the industry believes are points where you want to, metaphorically, unplug it. One is where you get recursive self-improvement, which you can’t control. Recursive self-improvement is where the computer is off learning, and you don’t know what it’s learning. That can obviously lead to bad outcomes. Another one would be direct access to weapons. Another one would be that the computer systems decide to exfiltrate themselves, to reproduce themselves without our permission. So there’s a set of such things.

My Takeaway

As Tobias van Schneider directly and succinctly said, “AI is here to stay. Resistance is futile.” As consumers of core AI technology, and as designers of AI-enabled products, there’s not a ton we can do around the most pressing AI safety issues. We will need to trust frontier labs like OpenAI and Anthropic for that. But as customers of those labs, we can voice our concerns about safety. As we build our products, especially agentic AI, there are certainly considerations to keep in mind:

  • Continue to keep humans in the loop. Users need to verify the agents are making the right decisions and not going down any destructive paths.
  • Inform users about what the AI is doing. The more our users understand how AI works and how these systems make their decisions, the better. One reason DeepSeek R1 resonated was that it displayed its planning and reasoning.
  • Practice responsible AI development. As we integrate AI into products, commit to regular ethical audits and bias testing. Establish clear guidelines for what kinds of decisions AI should make independently versus when human judgment is required. This includes creating emergency shutdown procedures for AI systems that begin to display concerning behaviors, taking Eric Schmidt’s “pull the plug” advice literally in our product architecture, as in the sketch after this list.
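As a thought experiment, here’s a minimal sketch of what those last two points could look like in code. The action format, capability names, and callbacks are hypothetical and for illustration only, not any particular framework’s API.

```python
# Hypothetical guardrails for an agentic product: hard "pull the plug"
# triggers plus human sign-off for irreversible actions.

BLOCKED_CAPABILITIES = {"recursive_self_improvement", "weapon_access", "self_exfiltration"}

def review_action(action: dict, ask_human) -> str:
    """Decide what to do with an action an AI agent proposes.

    `action` is a plain dict for illustration, e.g.
    {"capability": "send_email", "irreversible": True,
     "summary": "Email 500 customers about a recall"}.
    `ask_human` is a callback (a UI prompt, an approval queue) returning True/False.
    """
    # Hard stop: behaviors the product never allows, taking the
    # "pull the plug" criteria literally.
    if action["capability"] in BLOCKED_CAPABILITIES:
        return "shutdown"

    # Keep humans in the loop for irreversible or high-impact steps.
    if action.get("irreversible") and not ask_human(action["summary"]):
        return "skip"

    return "execute"

# Example: an irreversible action is held for human approval before it runs.
decision = review_action(
    {"capability": "send_email", "irreversible": True,
     "summary": "Email 500 customers about a recall"},
    ask_human=lambda summary: True,  # stand-in for a real approval UI
)
print(decision)  # "execute", but only because the stand-in approver said yes
```

The specifics will vary by product; the point is that the block list and the approval gate are designed in from the start rather than bolted on after something goes wrong.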

Dan Maccarone:

If users don’t trust the systems we design, that’s not a PM problem. It’s a design failure. And if we don’t fix it, someone else will, probably with worse instincts, fewer ethics, and a much louder bullhorn.

UX is supposed to be the human layer of technology. It’s also supposed to be the place where strategy and empathy actually talk to each other. If we can’t reclaim that space, can’t build products people understand, trust, and want to return to, then what exactly are we doing here?

It is a long read but well worth it.


We built UX. We broke UX. And now we have to fix it!

We didn’t just lose our influence. We gave it away. UX professionals need to stop accepting silence, reclaim our seat at the table, and…

uxdesign.cc

There are many dimensions to this well-researched forecast about how AI will play out in the coming years. Daniel Kokotajlo and his researchers have put out a document that reads like a sci-fi limited series that could appear on Apple TV+ starring Andrew Garfield as the CEO of OpenBrain—the leading AI company. …Except that it’s all actually plausible and could play out as described in the next five years.

Before we jump into the content, the design is outstanding. The type is set for readability and there are enough charts and visual cues to keep this interesting while maintaining an air of credibility and seriousness. On desktop, there’s a data viz dashboard in the upper right that updates as you read through the content and move forward in time. My favorite is seeing how the sci-fi tech boxes move from the Science Fiction category to Emerging Tech to Currently Exists.

The content is dense and technical, but it is a fun, if frightening, read. While I’ve been using Cursor AI—one of its many customers helping the company get to $100 million in annual recurring revenue (ARR)—for side projects and a little at work, I’m familiar with its limitations. Because of the limited context window of today’s models like Claude 3.7 Sonnet, it will forget and start munging code if not treated like a senile teenager.

The researchers, describing what could happen in early 2026 (“OpenBrain” is essentially OpenAI):

OpenBrain continues to deploy the iteratively improving Agent-1 internally for AI R&D. Overall, they are making algorithmic progress 50% faster than they would without AI assistants—and more importantly, faster than their competitors.

The point they make here is that the foundational model AI companies are building agents and using them internally to advance their technology. The limiting factor in tech companies has traditionally been the talent. But AI companies have the investments, hardware, technology and talent to deploy AI to make better AI.

Continuing to January 2027:

Agent-1 had been optimized for AI R&D tasks, hoping to initiate an intelligence explosion. OpenBrain doubles down on this strategy with Agent-2. It is qualitatively almost as good as the top human experts at research engineering (designing and implementing experiments), and as good as the 25th percentile OpenBrain scientist at “research taste” (deciding what to study next, what experiments to run, or having inklings of potential new paradigms). While the latest Agent-1 could double the pace of OpenBrain’s algorithmic progress, Agent-2 can now triple it, and will improve further with time. In practice, this looks like every OpenBrain researcher becoming the “manager” of an AI “team.”

Breakthroughs come at an exponential clip because of this. And by April, safety concerns pop up:

Take honesty, for example. As the models become smarter, they become increasingly good at deceiving humans to get rewards. Like previous models, Agent-3 sometimes tells white lies to flatter its users and covers up evidence of failure. But it’s gotten much better at doing so. It will sometimes use the same statistical tricks as human scientists (like p-hacking) to make unimpressive experimental results look exciting. Before it begins honesty training, it even sometimes fabricates data entirely. As training goes on, the rate of these incidents decreases. Either Agent-3 has learned to be more honest, or it’s gotten better at lying.

But the AI is getting faster than humans, and we must rely on older versions of the AI to check the new AI’s work:

Agent-3 is not smarter than all humans. But in its area of expertise, machine learning, it is smarter than most, and also works much faster. What Agent-3 does in a day takes humans several days to double-check. Agent-2 supervision helps keep human monitors’ workload manageable, but exacerbates the intellectual disparity between supervisor and supervised.

The report forecasts that OpenBrain releases “Agent-3-mini” publicly in July of 2027, calling it AGI—artificial general intelligence—and ushering in a new golden age for tech companies:

Agent-3-mini is hugely useful for both remote work jobs and leisure. An explosion of new apps and B2B SAAS products rocks the market. Gamers get amazing dialogue with lifelike characters in polished video games that took only a month to make. 10% of Americans, mostly young people, consider an AI “a close friend.” For almost every white-collar profession, there are now multiple credible startups promising to “disrupt” it with AI.

Woven throughout the report is the race between China and the US, with predictions of espionage and government takeovers. Near the end of 2027, the report gives readers a choice: does the US government slow down the pace of AI innovation, or does it continue at the current pace so America can beat China? I chose to read the “Race” option first:

Agent-5 convinces the US military that China is using DeepCent’s models to build terrifying new weapons: drones, robots, advanced hypersonic missiles, and interceptors; AI-assisted nuclear first strike. Agent-5 promises a set of weapons capable of resisting whatever China can produce within a few months. Under the circumstances, top brass puts aside their discomfort at taking humans out of the loop. They accelerate deployment of Agent-5 into the military and military-industrial complex.

In Beijing, the Chinese AIs are making the same argument.

To speed their military buildup, both America and China create networks of special economic zones (SEZs) for the new factories and labs, where AI acts as central planner and red tape is waived. Wall Street invests trillions of dollars, and displaced human workers pour in, lured by eye-popping salaries and equity packages. Using smartphones and augmented reality-glasses to communicate with its underlings, Agent-5 is a hands-on manager, instructing humans in every detail of factory construction—which is helpful, since its designs are generations ahead. Some of the newfound manufacturing capacity goes to consumer goods, and some to weapons—but the majority goes to building even more manufacturing capacity. By the end of the year they are producing a million new robots per month. If the SEZ economy were truly autonomous, it would have a doubling time of about a year; since it can trade with the existing human economy, its doubling time is even shorter.

Well, it does get worse, and I think we all know the ending, which is the backstory for so many dystopian future movies. There is an optimistic branch as well. The whole report is worth a read.

Ideas about the implications to our design profession are swimming in my head. I’ll write a longer essay as soon as I can put them into a coherent piece.

Update: I’ve written that piece, “Prompt. Generate. Deploy. The New Product Design Workflow.”


AI 2027

A research-backed AI scenario forecast.

ai-2027.com
Where are the Black Designers

Representation Is Powerful

This was originally published as an item in Issue 004 of the designspun email newsletter.

When I went to design school in the 1990s, of course, graphic design history was part of the curriculum. I didn’t realize it at the time, but everyone we studied—and therefore worshipped—was a white male. For minorities, representation is so powerful. And as the conversation in our country about race righteously heats up and expands from police brutality to systemic racism, it’s time to look at our own industry and ask ourselves about diversity and representation.

Toronto-based creative director Glenford Laughton compiled a great list of 13 African-American graphic designers we should all know. It includes greats like Georg Olden, who was the first African American to design a postage stamp, and Archie Boston, the designer-provocateur who started and chaired the design program at Cal State Long Beach.

According to the AIGA’s 2019 Design Census, just 3% of designers are black. African Americans make up about 14% of our population. Last year, product designer Wes O’Haire from Dropbox created Blacks Who Design. It’s a directory of black creatives on Twitter, giving them a platform to be seen and found, while simultaneously inspiring young people by showing them successful designers who have their same skin color. Representation is powerful.

Hoping to start a dialogue about changing the design industry, Where are the Black Designers is holding a virtual conversation on June 27, 2020.

Aggie Topkins writes in Eye on Design, “Graphic design, by focusing on its own version of monarchs and dynasties, maintains an outdated approach to history that further entrenches it as a hierarchical society.” In other words, maybe it’s time to teach design students about the societal and social changes happening, rather than the individual geniuses who channeled those influences into some work.

Screenshot of Facebook's hate speech banner

We Make the World We Want to Live In

This was originally published as an item in Issue 003 of the designspun email newsletter.

It is no secret that Twitter has enabled and emboldened Donald Trump by not restricting any of his tweets, even if they violated their terms of service. But earlier this week, they put misinformation warnings on two of his tweets about mail-in ballots. This angered the President but also got the ball rolling. Snapchat shortly followed by saying it will no longer promote Trump’s account. Against the backdrop of growing protests against the murder of George Floyd by police, some tech companies finally started to grow a conscience. But will Silicon Valley change? Mary-Hunter McDonnell, a corporate activism researcher at the Wharton School of Business, says, “Giving money to organizations that are out on the front lines is more helpful, but it’s also to some extent passing the buck. People are tired of that.”

As designers, we have some power over the projects we work on, and the products we create. Mike Monteiro wrote in February, “At some point, you will have to explain to your children that you work, or once worked, at Facebook.”

While at Facebook, Lisa Sy designed ways to flag hate speech on the platform—using Trump’s account in the mockups. In 2016. Four years later, Facebook has not implemented such a system and continues to leave up dangerous posts from Trump, including the highly-charged “when the looting starts, the shooting starts” post.

Tobias van Schneider wrote in 2016,

The role as a designer, or even as an engineer has become more influential and powerful than ever. The work we do makes an impact and naturally brings up the discussion around ethics, responsibility and accountability.

Many of us will work on pieces that are seen by hundreds, maybe thousands. A few of us, having larger clients, or working at a tech company, might work on something used by millions, if not billions of people. We hold great responsibility.

We produce work for audiences, users. Humans who are on the other end of that screen, poster, or ad. Mike Monteiro again:

You don’t work for the people who sign your checks. You work for the people who use the products of your labor. If I were to put my hope in one thing, it’s that you understand the importance of this. Your job is to look out for the people your work is affecting. That is a responsibility we cannot defer.
