
6 posts tagged with “business ethics”

Tim Berners-Lee, the father of the web who gave away the technology for free, says that we are at an inflection point with data privacy and AI. But before he makes that point, he reminds us that we are the product:

Today, I look at my invention and I am forced to ask: is the web still free today? No, not all of it. We see a handful of large platforms harvesting users’ private data to share with commercial brokers or even repressive governments. We see ubiquitous algorithms that are addictive by design and damaging to our teenagers’ mental health. Trading personal data for use certainly does not fit with my vision for a free web.

On many platforms, we are no longer the customers, but instead have become the product. Our data, even if anonymised, is sold on to actors we never intended it to reach, who can then target us with content and advertising. This includes deliberately harmful content that leads to real-world violence, spreads misinformation, wreaks havoc on our psychological wellbeing and seeks to undermine social cohesion.

And about that fork in the road with AI:

In 2017, I wrote a thought experiment about an AI that works for you. I called it Charlie. Charlie works for you like your doctor or your lawyer, bound by law, regulation and codes of conduct. Why can’t the same frameworks be adopted for AI? We have learned from social media that power rests with the monopolies who control and harvest personal data. We can’t let the same thing happen with AI.


Why I gave the world wide web away for free

My vision was based on sharing, not exploitation – and here’s why it’s still worth fighting for

theguardian.com

In my most recent post, I called out our design profession for our part in developing these addictive products. Jeffrey Inscho widens the lens to the tech industry at large and observes that these companies are actually publishers:

The executives at these companies will tell you they’re neutral platforms, that they don’t choose what content gets seen. This is a lie. Every algorithmic recommendation is an editorial decision. When YouTube’s algorithm suggests increasingly extreme political content to keep someone watching, that’s editorial. When Facebook’s algorithm amplifies posts that generate angry reactions, that’s editorial. When Twitter’s trending algorithms surface conspiracy theories, that’s editorial.

They are publishers. They have always been publishers. They just don’t want the responsibility that comes with being publishers.

His point is that if these social media platforms are sorting and promoting posts, it’s an editorial approach and they should be treated like newspapers. “It’s like a newspaper publisher claiming they’re not responsible for what appears on their front page because they didn’t write the articles themselves.”

The answer, Inscho argues, is regulation of the algorithms.

Turn Off the Internet

Big tech has built machines designed for one thing: to hold …

staticmade.com
[Image: Dark red-toned artwork of a person staring into a glowing phone, surrounded by swirling shadows.]

Blood in the Feed: Social Media’s Deadly Design

The assassination of Charlie Kirk on September 10, 2025, marked a horrifying inflection point in the growing debate over how digital platforms amplify rage and destabilize politics. I had already stepped back from social media after Trump’s re-election, and watching these events unfold from a distance only confirmed my decision. My feeds had become pits of despair, grievance, and negativity that did my mental health no favors. While I understand the need to shine a light on the atrocities of Trump and his government, the constant barrage was too much. So I mostly opted out, save for the occasional promotion of my writing.

Kirk’s death feels like the inevitable conclusion of systems we’ve built—systems that reward outrage, amplify division, and transform human beings into content machines optimized for engagement at any cost.

The Mechanics of Disconnection

As it turns out, my behavior isn’t out of the ordinary. People quit social media for various reasons, often situational—seeking balance in an increasingly overwhelming digital landscape. As a participant explained in a research project about social media disconnection:

It was just a build-up of stress and also a huge urge to change things in life. Like, ‘It just can’t go on like this.’ And that made me change a number of things. So I started to do more sports and eat differently, have more social contacts and stop using online media. And instead of sitting behind my phone for two hours in the evening, I read a book and did some work, went to work out, I went to a birthday or a barbecue. I was much more engaged in other things. It just gave me energy. And then I thought, ‘This is good. That’s the way it’s supposed to be. I have to maintain this.’

Sometimes the realization is more visceral—that on these platforms, we are the product. As Jef van de Graaf provocatively puts it:

Every post we make, every friend we invited, every little notification dragging us back into the feed serves one purpose: to extract money from us—and give nothing back but dopamine addiction and mental illness.

While his language is deliberately inflammatory, the sentiment resonates with many who’ve watched their relationship with these platforms sour. As he cautions:

Remember: social media exists because we feed it our lives. We trade our privacy and sanity so VCs and founders can get rich and live like greedy fucking kings.

The Architecture of Rage

The internet was built to connect people and ideas. Even the early iterations of Facebook and Twitter were relatively harmless because the timelines were chronological. But then the makers—product managers, designers, and engineers—of social media platforms began to optimize for engagement and visit duration. Was the birth of the social media algorithm the original sin?
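The mechanical change was small enough to sketch in a few lines. Here is a minimal illustration, not any platform’s actual ranker; the post fields and the stand-in engagement model are hypothetical:

```python
def chronological_feed(posts):
    # The early model: newest first; no editorial judgment encoded.
    return sorted(posts, key=lambda p: p["created_at"], reverse=True)

def engagement_feed(posts, predicted_engagement):
    # The later model: rank by whatever keeps people scrolling.
    # `predicted_engagement` stands in for a trained model that scores
    # likely clicks, comments, shares, and watch time. Outrage reliably
    # drives those signals, so it floats to the top.
    return sorted(posts, key=predicted_engagement, reverse=True)

# Two posts: an older piece of outrage bait and a newer, benign update.
posts = [
    {"id": 1, "created_at": 100, "outrage": 0.9},
    {"id": 2, "created_at": 200, "outrage": 0.1},
]
print([p["id"] for p in chronological_feed(posts)])                       # [2, 1]
print([p["id"] for p in engagement_feed(posts, lambda p: p["outrage"])])  # [1, 2]
```

Everything that follows in this piece is, one way or another, creators and audiences adapting to that second scoring function.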

Kevin Roose and Casey Newton explored this question in their Hard Fork episode following Kirk’s assassination, discussing how platforms have evolved to optimize for what they call “borderline content”—material that comes right up to the line of breaking a platform’s policy without quite going over. As Newton observed about Kirk himself:

He excelled at making what some of the platform nerds that I write about would call borderline content. So basically, saying things that come right up to the line of breaking a platform’s policy without quite going over… It turns out that the most compelling thing you can do on social media is to almost break a policy.

Kirk mastered this technique—speculating that vaccines killed millions, calling the Civil Rights Act a mistake, flirting with anti-Semitic tropes while maintaining plausible deniability. He understood the algorithm’s hunger for controversy, and fed it relentlessly. And then, in a horrible irony, he was killed by someone who had likely been radicalized by the very same algorithmic forces he’d helped unleash.

As Roose reflected:

We as a culture are optimizing for rage now. You see it on the social platforms. You see it from politicians calling for revenge for the assassination of Charlie Kirk. You even see it in these individual cases of people getting extremely mad at the person who made a joke about Charlie Kirk that was edgy and tasteless, and going to report them to their employer and get them fired. It’s all this sort of spectacle of rage, this culture of destroying and owning and humiliating.

The Unraveling of Digital Society

Social media and smartphones have fundamentally altered how we communicate and socialize, often at the expense of face-to-face interactions. These technologies have created a market for attention that fuels fear, anger, and political conflict. The research on mental health impacts is sobering: studies found that the introduction of Facebook to college campuses led to measurable increases in depression, accounting for approximately 24 percent of the increased prevalence of severe depression among college students over two decades.

In the wake of Kirk’s assassination, what struck me most was how the platforms immediately transformed tragedy into content. Within hours, there were viral posts celebrating his death, counter-posts condemning those celebrations, organizations collecting databases of “offensive” comments, people losing their jobs, death threats flying in all directions. As Newton noted:

This kind of surveillance and doxxing is essentially a kind of video game that you can play on X. And people like to play video games. And because you’re playing with people’s real lives, it feels really edgy and cool and fun for those who are participating in this.

The human cost is staggering—teachers, firefighters, military members fired or suspended for comments about Kirk’s death. Many received death threats. Far-right activists called for violence and revenge, doxxing anyone they accused of insufficient mourning.

Blood in the Feed

The last five years have been marked by eruptions of political violence that cannot be separated from the online world that incubated them.

  • The attack on Paul Pelosi (2022). The man who broke into House Speaker Nancy Pelosi’s San Francisco home and fractured her husband’s skull had been marinating in QAnon conspiracies and election denialism online. Extremism experts warned it was a textbook case of how stochastic terrorism—the idea that widespread demonization online can trigger unpredictable acts of violence by individuals—travels from platform rhetoric into a hammer-swinging hand.
  • The Trump assassination attempt (July 2024). A young man opened fire at a rally in Pennsylvania. His social media presence was filled with antisemitic, anti-immigrant content. Within hours, extremist forums were glorifying him as a martyr and calling for more violence.
  • The killing of Minnesota legislator Melissa Hortman and her husband (June 2025). Their murderer left behind a manifesto echoing the language of online white supremacist and anti-abortion communities. He wasn’t a “lone wolf.” He was drawing from the same toxic well of rhetoric that floods online forums. The language of his manifesto wasn’t unique—it was copied, recycled, and amplified in the ideological swamps anyone with a Wi-Fi connection can wander into.

These headline events sit atop a broader wave: the New Orleans truck-and-shooting rampage inspired by ISIS propaganda online (January 2025), the Cybertruck bombing outside Trump’s Las Vegas hotel (January 2025), tied to accelerationist forums (online spaces where extremists argue that violence should be used to hasten the collapse of society), and countless smaller assaults on election workers, minority communities, and public officials.

The pattern is depressingly clear. Platforms radicalize, amplify, and normalize the language of violence. Then, someone acts.

The Death of Authenticity

As social media became commoditized—a place to influence and promote consumption—it became less personal and more like TV. The platforms are now being overrun by AI spam and engagement-driven content that drowns out real human connection. As James O’Sullivan notes:

Platforms have little incentive to stem the tide. Synthetic accounts are cheap, tireless and lucrative because they never demand wages or unionize… Engagement is now about raw user attention – time spent, impressions, scroll velocity – and the net effect is an online world in which you are constantly being addressed but never truly spoken to.

Research confirms what users plainly see: tens of thousands of machine-written posts now flood public groups, pushing scams and chasing engagement. Whatever remains of genuine human content is increasingly sidelined by algorithmic prioritization, receiving fewer interactions than the engineered content and AI slop optimized solely for clicks.

The result? Networks that once promised a single interface for the whole of online life are splintering. Users drift toward smaller, slower, more private spaces—group chats, Discord servers, federated microblogs, and email newsletters. A billion little gardens replacing the monolithic, rage-filled public squares that have led to a burst of political violence.

The Designer’s Reckoning

This brings us to design and our role in creating these systems. As designers, are we beginning to reckon with what we’ve wrought?

Jony Ive, reflecting on his own role in creating the smartphone, acknowledges this burden:

I think when you’re innovating, of course, there will be unintended consequences. You hope that the majority will be pleasant surprises. Certain products that I’ve been very involved with, I think there were some unintended consequences that were far from pleasant. My issue is that even though there was no intention, I think there still needs to be responsibility. And that weighs on me heavily.

His words carry new weight after Kirk’s assassination—a death enabled by platforms we designed, algorithms we optimized, engagement metrics we celebrated.

At the recent World Design Congress in London, architect Indy Johar didn’t mince words:

We need ideas and practices that change how we, as humans, relate to the world… Ignoring the climate crisis means you’re an active operator in the genocide of the future.

But we might ask: What about ignoring the crisis of human connection? What about the genocide of civil discourse? Climate activist Tori Tsui’s warning applies equally to our digital architecture: “The rest of us are at the mercy of what you decide to do with your imagination.”

Political violence is accelerating and people are dying because of what we did with our imagination. If responsibility weighs heavily, so too must the search for alternatives.

The Possibility of Bridges

There are glimmers of hope in potential solutions. Aviv Ovadya’s concept of “bridging-based algorithms” offers one path forward—systems that actively seek consensus across divides rather than exploiting them. As Casey Newton explains:

They show them to people across the political spectrum… and they only show the note if people who are more on the left and more on the right agree. They see a bridge between the two of you and they think, well, if Republicans and Democrats both think this is true, this is likelier to be true.
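To make the idea concrete, here is a minimal sketch of a bridging-style rule, assuming a simplified world where each rater’s leaning is already known (real systems such as Community Notes infer agreement patterns from rating history instead); the names and threshold are illustrative, not Ovadya’s or any platform’s implementation:

```python
from dataclasses import dataclass

@dataclass
class Rating:
    helpful: bool
    leaning: str  # "left" or "right" -- a deliberate simplification

def bridging_score(ratings: list[Rating]) -> float:
    """Score a note by cross-group agreement rather than raw approval."""
    left = [r.helpful for r in ratings if r.leaning == "left"]
    right = [r.helpful for r in ratings if r.leaning == "right"]
    if not left or not right:
        return 0.0  # no cross-group signal yet, so don't surface the note
    # Taking the minimum rewards consensus: a note adored by one side and
    # rejected by the other scores only as high as its weakest group.
    return min(sum(left) / len(left), sum(right) / len(right))

def should_show(ratings: list[Rating], threshold: float = 0.6) -> bool:
    return bridging_score(ratings) >= threshold
```

The design choice is the min(): engagement ranking pays out to whichever side reacts loudest, while a bridging rule pays out only on overlap.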

But technological solutions alone won’t save us. The participants in social media disconnection studies often report developing better relationships with technology only after taking breaks. One participant explained:

It’s more the overload that I look at it every time, but it doesn’t really satisfy me, that it no longer had any value at a certain point in time. But that you still do it. So I made a conscious choice – a while back – to stop using Facebook.

Designing in the Shadow of Violence

Rob Alderson, in his dispatch from the World Design Congress, puts together a few pieces. Johar suggests design’s role is “desire manufacturing”—not just creating products, but rewiring society to want and expect different versions of the future. As COLLINS co-founder Leland Maschmeyer argued, design is about…

What do we want to do? What do we want to become? How do we get there? … We need to make another reality as real as possible, inspired by new context and the potential that holds.

The challenge before us isn’t just technical—it’s fundamentally about values and vision. We need to move beyond the Post-it workshops and develop what Johar calls “new competencies” that shape the future.

As I write this, having stepped back from the daily assault of algorithmic rage, I find myself thinking about the Victorian innovators Ive mentioned—companies like Cadbury’s and Fry’s that didn’t just build factories but designed entire towns, understanding that their civic responsibility extended far beyond their products. They recognized that the massive societal shift of moving people off the land they farmed and into industrial cities required holistic thinking about how people live and work together.

We stand at a similar inflection point. The tools we’ve created have reshaped human connection in ways that led directly to Charlie Kirk’s assassination. A young man, radicalized online, killed a figure who had mastered the art of online radicalization. The snake devoured its tail on a college campus in Utah, and we all watched it happen in real-time, transforming even this tragedy into content.

The vast majority of Americans, as Newton reminds us, “do not want to participate in a violent cultural war with people who disagree with them.” Yet our platforms are engineered to convince us otherwise, to make civil war feel perpetually imminent, to transform every disagreement into an existential threat.

The Cost of Our Imagination

Perhaps the real design challenge lies not in creating more engaging feeds or stickier platforms, but in designing systems that honor our humanity, foster genuine connection, and help us build the bridges we so desperately need.

Because while these US incidents show how social media incubates lone attackers and small cells, they pale in comparison to Myanmar, where Facebook’s algorithms directly amplified hate speech and incitement, contributing to the deaths of thousands—estimates range from 6,700 to as high as 24,000—and the forced displacement of over 700,000 Rohingya Muslims. That catastrophe made clear: when platforms optimize only for engagement, the result isn’t connection but carnage.

This is our design failure. We built systems that reward extremism, amplify rage, and treat human suffering as engagement. The tools meant to bring us together have instead armed us against each other. And we all bear responsibility for that.

It’s time we imagined something better—before the systems we’ve created finish the job of tearing us apart.

New York Times vs Apple

Mainstream Media Just Don’t Understand

This post was originally published on Medium.

I have long cringed at how the mainstream media reports on the technology industry to the public. From the use of randomly selected synonyms to downright misunderstandings of particular technologies, it’s sort of embarrassing to the reporter (usually someone who calls themselves a “technology reporter”) and the publication.

The latest examples come courtesy of The New York Times. I was alerted to this via a piece on BGR by Yoni Heisler titled, “The New York Times’ latest Apple hit piece is embarrassing and downright lazy.” Disclosure: I am a subscriber to The Times because I support their journalism. Their political reporters in particular have done a tremendous service to our country over the last couple of years. I usually trust what The Times writes about politics because I am not an expert in it. But the two pieces mentioned by BGR, the article about screen-time apps and the editorial about Apple’s supposed monopoly, are downright silly, because I do know a thing or two about technology and Apple.

Privacy, Security, and Violation of Terms

[Image: New York Times headline reads “Apple Cracks Down on Apps That Fight iPhone Addiction,” with an illustration of a smartphone screen where app icons are being consumed by a yellow Apple logo shaped like Pac-Man.]

The premise of the article is that Apple has “removed or restricted at least 11 of the 17 most downloaded screen-time and parental-control apps” over the past year. There are quotes and POVs from app developers and parents, and there are a couple quotes from Apple defending its actions. This is the full quote of what they printed from Phil Schiller, Apple’s marketing chief:

In response to this article, Philip W. Schiller, Apple’s senior vice president of worldwide marketing, said in emails to some customers that Apple “acted extremely responsibly in this matter, helping to protect our children from technologies that could be used to violate their privacy and security.”

When the article broke, an Apple customer wrote to Tim Cook, who had Schiller respond (emphasis mine):

Unfortunately the New York Times article you reference did not share our complete statement, nor explain the risks to children had Apple not acted on their behalf. Apple has long supported providing apps on the App Store, that work like our ScreenTime feature, to help parents manage their children’s access to technology and we will continue to encourage development of these apps. There are many great apps for parents on the App Store, like “Moment — Balance Screen Time” by Moment Health and “Verizon Smart Family” by Verizon Wireless.

However, over the last year we became aware that some parental management apps were using a technology called Mobile Device Management or “MDM” and installing an MDM Profile as a method to limit and control use of these devices. MDM is a technology that gives one party access to and control over many devices, it was meant to be used by a company on it’s own mobile devices as a management tool, where that company has a right to all of the data and use of the devices. The MDM technology is not intended to enable a developer to have access to and control over consumers’ data and devices, but the apps we removed from the store did just that. No one, except you, should have unrestricted access to manage your child’s device, know their location, track their app use, control their mail accounts, web surfing, camera use, network access, and even remotely erase their devices. Further, security research has shown that there is risk that MDM profiles could be used as a technology for hacker attacks by assisting them in installing apps for malicious purposes on users’ devices.

When the App Store team investigated the use of MDM technology by some developers of apps for managing kids devices and learned the risk they create to user privacy and security, we asked these developers to stop using MDM technology in their apps. Protecting user privacy and security is paramount in the Apple ecosystem and we have important App Store guidelines to not allow apps that could pose a threat to consumers privacy and security. We will continue to provide features, like ScreenTime, designed to help parents manage their children’s access to technology and we will work with developers to offer many great apps on the App Store for these uses, using technologies that are safe and private for us and our children.

Here is a layman’s summary of the above. The parental-control apps that Apple kicked off the App Store were using a technology intended for large corporations to manage their company-owned devices. This technology gives a corporation purview over, and access to, all of its devices’ location info, app usage, email accounts, web history, camera usage, and network access, and lets it remotely wipe those devices. In other words, if a user installed one of these parental-control apps that used MDM technology on their child’s phone, that app developer had access to all of that information and could control and wipe that phone.

The Times and the general media have been lambasting YouTube, Facebook, and Twitter for allowing users and other entities to stay on their platforms and abuse their policies. From hate speech (white nationalists, Alex Jones) to huge privacy gaffes (Cambridge Analytica), the media and the public have demanded these companies take responsibility and action, and prevent such episodes from happening again.

And yet, when Apple does take action here, when about a dozen companies release so-called parental-control apps on the App Store that use Apple technology in a way that violates its policies and gives access to thousands of iPhones belonging to kids, The New York Times has a conniption.

Apple does not have an issue with apps in the App Store that compete with its own (Apple Music vs. Spotify, Pandora, SiriusXM, iHeartRadio, SoundCloud; Safari vs. Google Chrome, Opera Mini, Firefox Focus). That’s what a vibrant, developer-centric marketplace is: competition. But when an app violates its policies, Apple should be able to act.

Who Owns a Marketplace?

[Image: New York Times opinion headline reads “Why Does Apple Control Its Competitors?” with an illustration of a person staring into a phone screen filled with Apple logos, while a large hand covers their head, symbolizing control and dominance.]

The second piece referenced by the BGR article is an op-ed by The Times’ Editorial Board, “Why Does Apple Control Its Competitors?”

First the op-ed compares Apple to Microsoft. Kind of ironic since Apple was on the brink of bankruptcy in the 1990s:

Apple’s management of the App Store is also dangerously reminiscent of the anti-competitive behavior that triggered United States v. Microsoft, a landmark antitrust case that changed the landscape of the tech industry.

Then the piece complains about Apple’s control over the App Store marketplace and its fees:

But Apple’s operating system for mobile devices makes it almost impossible to get an app outside of the App Store that will work on an Apple phone. Ultimately, the company controls what a user can or cannot do on their own iPhone. Apple also takes up to a 30 percent cut of in-app revenue, including revenue from “services” fulfilled in-app — like buying a premium subscription or an ebook. Because all apps go through the App Store, this 30 percent cut is nearly unavoidable.

Apple doesn’t ban apps like Amazon Kindle or Spotify, which compete with Apple Books and Apple Music, respectively. But the 30 percent fee still stings. That’s the reason you currently can’t buy a Kindle ebook through the Kindle mobile app. Spotify, a much smaller company, now pays Apple between 15 and 30 percent of its in-app revenue in order to serve streaming music to its premium subscribers. Spotify recently filed a complaint with European regulators, accusing Apple of anticompetitive practices. In the United States, an antitrust lawsuit pending before the Supreme Court alleges that the 30 percent cut drives up prices for consumers.
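To make that cut concrete with a hypothetical price: on a $9.99 ebook or subscription sold in-app, Apple would keep up to $3.00, leaving the seller $6.99 before any of its own costs. That arithmetic is why you can’t buy a Kindle book inside the Kindle app; Amazon opted out of in-app sales rather than pay the commission.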

Let’s try this analogy. Suppose Apple is a department store. This department store has products lined up on shelves and racks, organized by sections and aisles so customers can find what they’re looking for. This store has endcaps that feature certain promoted products. What The Times is asking is this: Why can’t any company that wants to sell in Apple’s department store just set up shop inside the store and sell direct? And why should Apple make 30% gross profit on selling each item in the store?¹

If Apple allowed any product inside its department store, what if the product is shoddy and doesn’t work? What if there’s a safety issue? What if a seller wants to line the shelves with porno magazines? This tarnishes the reputation of Apple’s store. Customers will start to think Apple doesn’t care about the quality of products it sells, or that the store is now inhospitable for families with children.

(If the department store analogy doesn’t work for you, what about a farmers’ market? The App Store is the market and the developers are the vendors. Organizers of farmers’ markets decide who to allow in to sell, and what they can sell.)

A core of The Times’ monopoly argument seems to be around the fact that the App Store is the only place to get apps that work on an iPhone:

But Apple’s operating system for mobile devices makes it almost impossible to get an app outside of the App Store that will work on an Apple phone. Ultimately, the company controls what a user can or cannot do on their own iPhone.

True. But time and again Apple has argued that its thorough review process is critical to prevent malicious and low-quality apps from appearing in its App Store.

And consumers face compatibility or “walled garden” scenarios with lots of things they buy. Printers need specific toner cartridges. Coffeemakers accept only certain shapes of pods. Video game consoles play only their own cartridges or discs. In all these examples, including the iPhone, consumers can find workarounds. But use at your own risk, and don’t go crying to the manufacturer if your product breaks as a result. Fair enough, right?

And finally…

Even if we take Apple at its word that it was only protecting the privacy and security of its users by removing screen-time and parental-control apps, the state of the app marketplace is troubling. Why is a company — with no mechanism for democratic oversight — the primary and most zealous guardian of user privacy and security? Why is one company in charge of vetting what users can or cannot do on their phones, especially when that company also makes apps that compete in a marketplace that it controls?

The short answer is that Apple created the marketplace and controls the rules. eBay is a marketplace and controls its own policies, banning the sale of body parts, government IDs, and Nazi-related artifacts. Airbnb, Uber, and Upwork are all marketplaces with their own policies.² Why is Airbnb — with no mechanism for democratic oversight — the primary and most zealous guardian of what hosts can and cannot do with their listings? Why is Uber — with no mechanism for democratic oversight — the primary and most zealous guardian of how drivers should behave with their passengers? Why is Upwork — with no mechanism for democratic oversight — the primary and most zealous guardian of how workers should interact with clients?

But again, I go back to the double standard at The Times. In one breath it wants Facebook and “Big Tech” to be better at preventing the viral spread of horrific imagery, by removing posts with said imagery, thus keeping a tighter grasp on its own platform. And in another it wants Apple to loosen its control of its own marketplace. Comparing the spread of a video of a mass shooting with App Store policies is crass, I know. But they are sending mixed messages to Silicon Valley.

I actually agree with The Times that Facebook and other social media platforms must somehow use technology to combat humanity’s most wretched behaviors. They do need to figure out a way to rein in the monster they’ve unleashed.

However, I disagree with their view that Apple must relax its tight control over the App Store, because I want a company that has been the most socially responsible in this age of Big Tech to curate over two million apps in the App Store and prevent me from downloading an app that could brick my phone or expose my private data.

The press plays an important role in our society—to hold powerful entities and individuals to account. However, before lobbing any accusations, before sparking any debate, it really should get its facts straight and understand the material first.


[1] Most retailers roughly double the wholesale price of the products they sell, a 50% gross margin known in retail as keystone pricing. Example: they buy a tube of toothpaste for $2 and sell it to you for $4.

[2] By the way, here are the sellers’ fees for each of these marketplaces: