
66 posts tagged with “tech industry”

Ian Dean, writing for Creative Bloq, revisits the impact the original TRON movie had on visual effects and the design industry. The film was not nominated for an Oscar for visual effects as the Academy’s members claimed that “using computers was ‘cheating.’” Little did they know it was only the beginning of a revolution.

More than four decades later, TRON still feels like a moment when the film industry stopped and changed direction, just as it had years earlier when Oz was colourised and Mary Poppins danced with animated animals.

Dean asks, now what about AI-powered visual effects? Runway and Sora are only the beginning.

The TRON Oscar snub that predicted today’s AI in filmmaking

What we can learn from the 1982 film’s frosty reception.

creativebloq.com

In the scenario “AI 2027,” the authors argue that by October 2027—exactly two years from now—we will be at an inflection point: race to build superintelligence, or slow down the pace to fix misalignment issues first.

Derek Thompson, writing in The Argument, takes a different predicted AI doomsday date—18 months—and argues:

The problem of the next 18 months isn’t AI disemploying all workers, or students losing competition after competition to nonhuman agents. The problem is whether we will degrade our own capabilities in the presence of new machines. We are so fixated on how technology will outskill us that we miss the many ways that we can deskill ourselves.

Degrading our own capabilities includes writing:

The demise of writing matters because writing is not a second thing that happens after thinking. The act of writing is an act of thinking. This is as true for professionals as it is for students. In “Writing is thinking,” an editorial in Nature, the authors argued that “outsourcing the entire writing process to LLMs” deprives scientists of the important work of understanding what they’ve discovered and why it matters.

The decline of writing and reading matters because writing and reading are the twin pillars of deep thinking, according to Cal Newport, a computer science professor and the author of several bestselling books, including Deep Work. The modern economy prizes the sort of symbolic logic and systems thinking for which deep reading and writing are the best practice.

More depressing trends to add to the list.

“You have 18 months”

The real deadline isn’t when AI outsmarts us — it’s when we stop using our own minds.

theargumentmag.com

Is the AI bubble about to burst? Apparently, AI prompt-to-code tools like Lovable and v0 have peaked and are on their way down.

Alistair Barr writing for Business Insider:

The drop-off raises tough questions for startups that flaunted exponential annual recurring revenue growth just months ago. Analysts wrote that much of that revenue comes from month-to-month subscribers who may churn as quickly as they signed up, putting the durability of those flashy numbers in doubt.

Barr interviewed Eric Simons, CEO of Bolt, who said:

“This is the problem across all these companies right now. The churn rate for everyone is really high,” Simons said. “You have to build a retentive business.”
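Simons’s point about retention is easy to see with a little arithmetic. Here’s a minimal sketch (all numbers hypothetical) of how monthly churn erodes the flashy ARR a startup reports today:

```python
# Illustrative only: why ARR built on month-to-month subscribers is
# fragile under high churn. Figures below are invented.

def retained_arr(arr_today: float, monthly_churn: float, months: int) -> float:
    """ARR remaining from today's subscriber base after `months` of churn,
    assuming no new sign-ups replace the ones who leave."""
    return arr_today * (1 - monthly_churn) ** months

# A startup "flaunting" $12M ARR from monthly subscribers:
arr = 12_000_000
for churn in (0.05, 0.15, 0.30):
    left = retained_arr(arr, churn, 12)
    print(f"{churn:.0%} monthly churn -> ${left:,.0f} of that ARR left in a year")
```

At 15% monthly churn, well over 80% of today’s headline number is gone within a year unless new customers keep backfilling it, which is exactly the durability question the analysts are raising.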

AI vibe coding tools were supposed to change everything. Now traffic is crashing.

Vibe coding tools have seen traffic drop, with Vercel’s v0 and Lovable seeing significant declines, raising sustainability questions, Barclays warns.

businessinsider.com

In an announcement to users this morning, Visual Electric said they were being acquired by Perplexity—or more accurately, the team that makes Visual Electric will be hired by Perplexity. The service will shut down in the next 90 days.

Today we’re sharing the next step in Visual Electric’s journey: we’ve been acquired by Perplexity. This is a milestone that marks both an exciting opportunity for our team and some big changes for our product.

Over the next 90 days we’ll be sunsetting Visual Electric, and our team will be forming a new Agent Experiences group at Perplexity.

While we’ve seen acquihires and shutdowns in the AI infrastructure space (e.g., Scale AI) and the coding space (e.g., Windsurf), I don’t believe we’ve seen an exit like this in the image or video gen AI space yet. Of course, The Browser Company announced its acquisition by Atlassian last month.

I believe building gen AI tools at this moment is incredibly competitive; it takes an even stronger-stomached entrepreneur than in the pre-ChatGPT era. So kudos to the folks at Visual Electric for having a good outcome and getting to continue their work at Perplexity. But I don’t think this is the last consolidation we’ll see in this space.


Visual Electric is Joining Perplexity


visualelectric.com

Tim Berners-Lee, the father of the web who gave away the technology for free, says that we are at an inflection point with data privacy and AI. But before he makes that point, he reminds us that we are the product:

Today, I look at my invention and I am forced to ask: is the web still free today? No, not all of it. We see a handful of large platforms harvesting users’ private data to share with commercial brokers or even repressive governments. We see ubiquitous algorithms that are addictive by design and damaging to our teenagers’ mental health. Trading personal data for use certainly does not fit with my vision for a free web.

On many platforms, we are no longer the customers, but instead have become the product. Our data, even if anonymised, is sold on to actors we never intended it to reach, who can then target us with content and advertising. This includes deliberately harmful content that leads to real-world violence, spreads misinformation, wreaks havoc on our psychological wellbeing and seeks to undermine social cohesion.

And about that fork in the road with AI:

In 2017, I wrote a thought experiment about an AI that works for you. I called it Charlie. Charlie works for you like your doctor or your lawyer, bound by law, regulation and codes of conduct. Why can’t the same frameworks be adopted for AI? We have learned from social media that power rests with the monopolies who control and harvest personal data. We can’t let the same thing happen with AI.


Why I gave the world wide web away for free

My vision was based on sharing, not exploitation – and here’s why it’s still worth fighting for

theguardian.com

In my most recent post, I called out our design profession for our part in developing these addictive products. Jeffrey Inscho brings it back up to the tech industry at large and observes that these companies are actually publishers:

The executives at these companies will tell you they’re neutral platforms, that they don’t choose what content gets seen. This is a lie. Every algorithmic recommendation is an editorial decision. When YouTube’s algorithm suggests increasingly extreme political content to keep someone watching, that’s editorial. When Facebook’s algorithm amplifies posts that generate angry reactions, that’s editorial. When Twitter’s trending algorithms surface conspiracy theories, that’s editorial.

They are publishers. They have always been publishers. They just don’t want the responsibility that comes with being publishers.

His point is that if these social media platforms are sorting and promoting posts, it’s an editorial approach and they should be treated like newspapers. “It’s like a newspaper publisher claiming they’re not responsible for what appears on their front page because they didn’t write the articles themselves.”

The answer, Inscho argues, is regulation of the algorithms.

Turn Off the Internet

Big tech has built machines designed for one thing: to hold …

staticmade.com

I’m happy that the conversation around the design talent crisis continues. Carly Ayres, writing for It’s Nice That, picks up the torch and speaks to designers and educators about this topic. What struck me—and I think what adds to the dialogue—is the notion of the belief gap. Ayres spoke with Naheel Jawaid, founder of Silicon Valley School of Design, about it:

“A big part of what I do is just being a coach, helping someone see their potential when they don’t see it yet,” Naheel says. “I’ve had people tell me later that a single conversation changed how they saw themselves.”

In the past, belief capital came from senior designers taking juniors under their wing. Today, those same seniors are managing instability of their own. “It’s a bit of a ‘dog eat dog world’-type vibe,” Naheel says. “It’s really hard to get mentorship right now.”

The whole piece is great. Tighter than my sprawling three-parter. I do think there’s a piece missing, though. While Ayres highlights the issue and offers suggestions from design leaders, businesses need to step up and do something about it—i.e., hire more juniors. Recognizing the problem is only the first step.


Welcome to the entry-level void: what happens when junior design jobs disappear?

Entry-level jobs are disappearing. In their place: unpaid gigs, cold DMs and self-starters scrambling for a foothold. The ladder’s gone – what’s replacing it, and who’s being left behind?

itsnicethat.com
Dark red-toned artwork of a person staring into a glowing phone, surrounded by swirling shadows.

Blood in the Feed: Social Media’s Deadly Design

The assassination of Charlie Kirk on September 10, 2025, marked a horrifying inflection point in the growing debate over how digital platforms amplify rage and destabilize politics. As someone who had already stepped back from social media after Trump’s re-election, watching these events unfold from a distance only confirmed my decision. My feeds had become pits of despair, grievance, and negativity that did my mental health no favors. While I understand the need to shine a light on the atrocities of Trump and his government, the constant barrage was too much. So I mostly opted out, save for the occasional promotion of my writing.

Kirk’s death feels like the inevitable conclusion of systems we’ve built—systems that reward outrage, amplify division, and transform human beings into content machines optimized for engagement at any cost.

The Mechanics of Disconnection

As it turns out, my behavior isn’t out of the ordinary. People quit social media for various reasons, often situational—seeking balance in an increasingly overwhelming digital landscape. As a participant explained in a research project about social media disconnection:

It was just a build-up of stress and also a huge urge to change things in life. Like, ‘It just can’t go on like this.’ And that made me change a number of things. So I started to do more sports and eat differently, have more social contacts and stop using online media. And instead of sitting behind my phone for two hours in the evening, I read a book and did some work, went to work out, I went to a birthday or a barbecue. I was much more engaged in other things. It just gave me energy. And then I thought, ‘This is good. That’s the way it’s supposed to be. I have to maintain this.’

Sometimes the realization is more visceral—that on these platforms, we are the product. As Jef van de Graaf provocatively puts it:

Every post we make, every friend we invited, every little notification dragging us back into the feed serves one purpose: to extract money from us—and give nothing back but dopamine addiction and mental illness.

While his language is deliberately inflammatory, the sentiment resonates with many who’ve watched their relationship with these platforms sour. As he cautions:

Remember: social media exists because we feed it our lives. We trade our privacy and sanity so VCs and founders can get rich and live like greedy fucking kings.

The Architecture of Rage

The internet was built to connect people and ideas. Even the early iterations of Facebook and Twitter were relatively harmless because the timelines were chronological. But then the makers—product managers, designers, and engineers—of social media platforms began to optimize for engagement and visit duration. Was the birth of the social media algorithm the original sin?
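The shift from one era of feed to the other can be sketched in a few lines. This is a toy illustration, not any platform’s actual ranking; the posts and weights are invented, with angry reactions weighted up because they predict time-on-site:

```python
# Toy contrast between a chronological timeline and an
# engagement-optimized one. All data and weights are hypothetical.

posts = [
    {"id": "vacation pics", "age_hrs": 1, "likes": 12, "angry": 0},
    {"id": "outrage bait",  "age_hrs": 9, "likes": 40, "angry": 220},
    {"id": "baby photos",   "age_hrs": 3, "likes": 30, "angry": 1},
]

# The early feed: newest first, no editorial judgment.
chronological = sorted(posts, key=lambda p: p["age_hrs"])

def engagement(post: dict) -> int:
    # Rage weighted 3x (invented number) because it keeps people scrolling.
    return post["likes"] + 3 * post["angry"]

# The optimized feed: whatever provokes the most reaction wins.
ranked = sorted(posts, key=engagement, reverse=True)

print([p["id"] for p in chronological])  # vacation, baby, outrage
print([p["id"] for p in ranked])         # outrage bait floats to the top
```

Nothing in the second sort is malicious on its face; it simply rewards whatever generates reaction, and rage reliably does.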

Kevin Roose and Casey Newton explored this question in their Hard Fork episode following Kirk’s assassination, discussing how platforms have evolved to optimize for what they call “borderline content”—material that comes right up to the line of breaking a platform’s policy without quite going over. As Newton observed about Kirk himself:

He excelled at making what some of the platform nerds that I write about would call borderline content. So basically, saying things that come right up to the line of breaking a platform’s policy without quite going over… It turns out that the most compelling thing you can do on social media is to almost break a policy.

Kirk mastered this technique—speculating that vaccines killed millions, calling the Civil Rights Act a mistake, flirting with anti-Semitic tropes while maintaining plausible deniability. He understood the algorithm’s hunger for controversy, and fed it relentlessly. And then, in a horrible irony, he was killed by someone who had likely been radicalized by the very same algorithmic forces he’d helped unleash.

As Roose reflected:

We as a culture are optimizing for rage now. You see it on the social platforms. You see it from politicians calling for revenge for the assassination of Charlie Kirk. You even see it in these individual cases of people getting extremely mad at the person who made a joke about Charlie Kirk that was edgy and tasteless, and going to report them to their employer and get them fired. It’s all this sort of spectacle of rage, this culture of destroying and owning and humiliating.

The Unraveling of Digital Society

Social media and smartphones have fundamentally altered how we communicate and socialize, often at the expense of face-to-face interactions. These technologies have created a market for attention that fuels fear, anger, and political conflict. The research on mental health impacts is sobering: studies found that the introduction of Facebook to college campuses led to measurable increases in depression, accounting for approximately 24 percent of the increased prevalence of severe depression among college students over two decades.

In the wake of Kirk’s assassination, what struck me most was how the platforms immediately transformed tragedy into content. Within hours, there were viral posts celebrating his death, counter-posts condemning those celebrations, organizations collecting databases of “offensive” comments, people losing their jobs, death threats flying in all directions. As Newton noted:

This kind of surveillance and doxxing is essentially a kind of video game that you can play on X. And people like to play video games. And because you’re playing with people’s real lives, it feels really edgy and cool and fun for those who are participating in this.

The human cost is staggering—teachers, firefighters, military members fired or suspended for comments about Kirk’s death. Many received death threats. Far-right activists called for violence and revenge, doxxing anyone they accused of insufficient mourning.

Blood in the Feed

The last five years have been marked by eruptions of political violence that cannot be separated from the online world that incubated them.

  • The attack on Paul Pelosi (2022). The man who broke into the Speaker of the House Nancy Pelosi’s San Francisco home and fractured her husband’s skull had been marinating in QAnon conspiracies and election denialism online. Extremism experts warned it was a textbook case of how stochastic terrorism—the idea that widespread demonization online can trigger unpredictable acts of violence by individuals—travels from platform rhetoric into a hammer-swinging hand.
  • The Trump assassination attempt (July 2024). A young man opened fire at a rally in Pennsylvania. His social media presence was filled with antisemitic, anti-immigrant content. Within hours, extremist forums were glorifying him as a martyr and calling for more violence.
  • The killing of Minnesota legislator Melissa Hortman and her husband (June 2025). Their murderer left behind a manifesto echoing the language of online white supremacist and anti-abortion communities. He wasn’t a “lone wolf.” He was drawing from the same toxic well of white supremacist and anti-abortion rhetoric that floods online forums. The language of his manifesto wasn’t unique—it was copied, recycled, and amplified in the ideological swamps anyone with a Wi-Fi connection can wander into.

These headline events sit atop a broader wave: the New Orleans truck-and-shooting rampage inspired by ISIS propaganda online (January 2025), the Cybertruck bombing outside Trump’s Los Angeles hotel tied to accelerationist forums—online spaces where extremists argue that violence should be used to hasten the collapse of society (January 2025), and countless smaller assaults on election workers, minority communities, and public officials.

The pattern is depressingly clear. Platforms radicalize, amplify, and normalize the language of violence. Then, someone acts.

The Death of Authenticity

As social media became commoditized—a place to influence and promote consumption—it became less personal and more like TV. The platforms are now being overrun by AI spam and engagement-driven content that drowns out real human connection. As James O’Sullivan notes:

Platforms have little incentive to stem the tide. Synthetic accounts are cheap, tireless and lucrative because they never demand wages or unionize… Engagement is now about raw user attention – time spent, impressions, scroll velocity – and the net effect is an online world in which you are constantly being addressed but never truly spoken to.

Research confirms what users plainly see: tens of thousands of machine-written posts now flood public groups, pushing scams and chasing engagement. Whatever remains of genuine human content is increasingly sidelined by algorithmic prioritization, receiving fewer interactions than the engineered content and AI slop optimized solely for clicks.

The result? Networks that once promised a single interface for the whole of online life are splintering. Users drift toward smaller, slower, more private spaces—group chats, Discord servers, federated microblogs, and email newsletters. A billion little gardens replacing the monolithic, rage-filled public squares that have led to a burst of political violence.

The Designer’s Reckoning

This brings us to design and our role in creating these systems. As designers, are we beginning to reckon with what we’ve wrought?

Jony Ive, reflecting on his own role in creating the smartphone, acknowledges this burden:

I think when you’re innovating, of course, there will be unintended consequences. You hope that the majority will be pleasant surprises. Certain products that I’ve been very involved with, I think there were some unintended consequences that were far from pleasant. My issue is that even though there was no intention, I think there still needs to be responsibility. And that weighs on me heavily.

His words carry new weight after Kirk’s assassination—a death enabled by platforms we designed, algorithms we optimized, engagement metrics we celebrated.

At the recent World Design Congress in London, architect Indy Johar didn’t mince words:

We need ideas and practices that change how we, as humans, relate to the world… Ignoring the climate crisis means you’re an active operator in the genocide of the future.

But we might ask: what about ignoring the crisis of human connection? What about the genocide of civil discourse? Climate activist Tori Tsui’s warning applies equally to our digital architecture: “The rest of us are at the mercy of what you decide to do with your imagination.”

Political violence is accelerating and people are dying because of what we did with our imagination. If responsibility weighs heavily, so too must the search for alternatives.

The Possibility of Bridges

There are glimmers of hope in potential solutions. Aviv Ovadya’s concept of “bridging-based algorithms” offers one path forward—systems that actively seek consensus across divides rather than exploiting them. As Casey Newton explains:

They show them to people across the political spectrum… and they only show the note if people who are more on the left and more on the right agree. They see a bridge between the two of you and they think, well, if Republicans and Democrats both think this is true, this is likelier to be true.
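The core of that bridging idea fits in a few lines. This is a simplified thresholding stand-in, not Community Notes’ production algorithm (which scores notes with matrix factorization over rater histories); it just captures the principle Newton describes, that one-sided applause shouldn’t be enough:

```python
# Minimal sketch of a bridging-based ranking rule: surface content only
# when raters from *both* sides of a divide find it helpful.

def bridging_score(left_ratings: list[int], right_ratings: list[int]) -> float:
    """Ratings are 1 (helpful) or 0 (not). The score is the minimum of the
    two groups' approval rates, so consensus across the divide is required."""
    def approval(ratings: list[int]) -> float:
        return sum(ratings) / len(ratings) if ratings else 0.0
    return min(approval(left_ratings), approval(right_ratings))

def show_note(left: list[int], right: list[int], threshold: float = 0.6) -> bool:
    return bridging_score(left, right) >= threshold

# Broad agreement across the spectrum -> shown
print(show_note([1, 1, 1, 0], [1, 1, 0, 1]))  # True
# Loved by one side, rejected by the other -> suppressed
print(show_note([1, 1, 1, 1], [0, 0, 1, 0]))  # False
```

Using the minimum rather than the average is the whole trick: an engagement-style average would happily surface the second note, while the bridging rule refuses it.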

But technological solutions alone won’t save us. The participants in social media disconnection studies often report developing better relationships with technology only after taking breaks. One participant explained:

It’s more the overload that I look at it every time, but it doesn’t really satisfy me, that it no longer had any value at a certain point in time. But that you still do it. So I made a conscious choice – a while back – to stop using Facebook.

Designing in the Shadow of Violence

Rob Alderson, in his dispatch from the World Design Congress, puts together a few pieces. Johar suggests design’s role is “desire manufacturing”—not just creating products, but rewiring society to want and expect different versions of the future. As COLLINS co-founder Leland Maschmeyer argued, design is about…

What do we want to do? What do we want to become? How do we get there? … We need to make another reality as real as possible, inspired by new context and the potential that holds.

The challenge before us isn’t just technical—it’s fundamentally about values and vision. We need to move beyond the Post-it workshops and develop what Johar calls “new competencies” that shape the future.

As I write this, having stepped back from the daily assault of algorithmic rage, I find myself thinking about the Victorian innovators Ive mentioned—companies like Cadbury’s and Fry’s that didn’t just build factories but designed entire towns, understanding that their civic responsibility extended far beyond their products. They recognized that the massive societal shift of moving people from the land they farmed to the cities where they worked required holistic thinking about how people live and work together.

We stand at a similar inflection point. The tools we’ve created have reshaped human connection in ways that led directly to Charlie Kirk’s assassination. A young man, radicalized online, killed a figure who had mastered the art of online radicalization. The snake devoured its tail on a college campus in Utah, and we all watched it happen in real-time, transforming even this tragedy into content.

The vast majority of Americans, as Newton reminds us, “do not want to participate in a violent cultural war with people who disagree with them.” Yet our platforms are engineered to convince us otherwise, to make civil war feel perpetually imminent, to transform every disagreement into an existential threat.

The Cost of Our Imagination

Perhaps the real design challenge lies not in creating more engaging feeds or stickier platforms, but in designing systems that honor our humanity, foster genuine connection, and help us build the bridges we so desperately need.

Because while these US incidents show how social media incubates lone attackers and small cells, they pale in comparison to Myanmar, where Facebook’s algorithms directly amplified hate speech and incitement, contributing to the deaths of thousands—estimates range from 6,700 to as high as 24,000—and the forced displacement of over 700,000 Rohingya Muslims. That catastrophe made clear: when platforms optimize only for engagement, the result isn’t connection but carnage.

This is our design failure. We built systems that reward extremism, amplify rage, and treat human suffering as engagement. The tools meant to bring us together have instead armed us against each other. And we all bear responsibility for that.

It’s time we imagined something better—before the systems we’ve created finish the job of tearing us apart.

Still from a video shown at Apple Keynote 2025. Split screen of AirPods Pro connection indicator on left, close-up of earbuds in charging case on right.

Notes About the September 2025 Apple Event

Today’s Apple keynote opened with a classic quote from Steve Jobs.

Steve Jobs quote at Apple Keynote 2025 – Black keynote slide with white text: “Design is not just what it looks like and feels like. Design is how it works.” – Steve Jobs.

Then a video played, focused on the fundamental geometric shapes that can be found in Apple’s products: circles in the HomePod, iPhone shutter button, iPhone camera, MagSafe charging ring, Digital Crown on Apple Watch; rounded squares in the charging block, Home scene button, Mac mini, keycaps, Finder icon, FaceID; to the lozenges found in the AirPods case, MagSafe port, Liquid Glass carousel control, and the Action button on Apple Watch Ultra.

Then Tim Cook repeated the notion in his opening remarks:

At Apple, design has always been fundamental to who we are and what we do. For us, design goes beyond just how something looks or feels. Design is also how it works. This philosophy guides everything we do, including the products we’re going to introduce today and the experiences they provide.

Apple announced a bunch of products today, including:

  • AirPods Pro 3 with better active noise canceling, live translation, and heart rate sensing (more below)
  • Apple Watch Series 11, thinner and with hypertension alerts and sleep score
  • iPhone 17 with a faster chip and better camera (as always)
  • iPhone Air at 5.6 mm thin! They packed all the main components into a new full-width camera “plateau” (I guess that’s the new word for camera bump)
  • iPhone 17 Pro / Pro Max with a faster chip and even better camera (as always), along with unibody construction and cool vapor cooling (like liquid cooling, but with vapor), and a beefy camera plateau

Highlights

Live Translation is Star Trek’s Universal Translator

In the Star Trek universe, humans regularly speak English with aliens and the audience hears those same aliens reply in English. Of course, it’s television and it was always explained away—that a “universal translator” is embedded in the comm badge all Starfleet crew members wear.

Apple Keynote 2025 iPhone Live Translation feature – Woman holds up an iPhone displaying translated text, demonstrating Apple Intelligence with AirPods Pro 3.

With AirPods Pro 3, this is becoming real! In one demo video, Apple shows a woman at a market. She’s shopping and hears a vendor speak to her in Spanish. Through her AirPods, she hears the live translation and can reply in English and have that translated back to Spanish on her iPhone. Then, in another scene, two guys are talking—English and Italian—and they’re both wearing the new AirPods and having a seamless conversation. Amazing.

Apple Keynote 2025 AirPods Pro 3 Live Translation demo at café – Man wearing AirPods Pro 3 sits outdoors at a café table, smiling while testing real-time language translation.

Heart Rate Monitoring in AirPods

Apple is extending its fitness-tracking features to AirPods, specifically the new AirPods Pro 3. These come with a new sensor that pulses invisible infrared light 256 times per second to detect blood flow and calculate heart rate. I’m always astonished by how Apple keeps extending the capabilities of its devices to push health and fitness metrics, which—at least so their thesis goes—helps with overall wellbeing. (See below.)
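The underlying principle is photoplethysmography: sample the light reflected back from the skin at 256 Hz, then count the periodic bumps each heartbeat leaves in the signal. A toy sketch, with a synthetic sine wave standing in for a real sensor trace and a deliberately naive beat counter:

```python
# Back-of-the-envelope PPG sketch. Not Apple's implementation; the
# signal is synthetic and the beat detector is a toy zero-crossing counter.
import math

FS = 256  # samples per second, matching Apple's stated pulse rate

def synthetic_ppg(bpm: float, seconds: float) -> list[float]:
    """Fake blood-flow signal: one smooth bump per heartbeat."""
    n = int(FS * seconds)
    beat_hz = bpm / 60.0
    return [math.sin(2 * math.pi * beat_hz * t / FS) for t in range(n)]

def heart_rate(signal: list[float]) -> float:
    """Count upward zero crossings (one per beat) and convert to BPM."""
    beats = sum(1 for a, b in zip(signal, signal[1:]) if a <= 0 < b)
    return beats * 60.0 / (len(signal) / FS)

print(round(heart_rate(synthetic_ppg(72, 10))))  # 72
```

A real sensor has to contend with motion artifacts, ambient light, and skin-tone variation, which is where the actual engineering lives; the arithmetic above is just the easy last step.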

Full-Width Camera Bump

Or, the new camera plateau. I actually prefer the full width over just the bump. I feel like the plain camera bump on my iPhone 16 Pro makes the phone too wobbly when I put it on its back. I think a bump that spans the full width of the phone will make it more stable. This new design is on the new iPhone Air and iPhone 17 Pro.

To Air or Not to Air?

I’m on the iPhone Upgrade Program so I can get a new phone each year—and I have for the last few. I’m wondering if I want to get the Air this time. One thing I dislike about the iPhone Pros is their weight; the Pro is heavy enough that I can feel it in my hand after prolonged use. At 165 grams, the Air is 17% lighter than the 16 Pro (199 grams). It might make a difference.

Overall Thoughts

Of course, in 2025, it’s a little striking that Apple didn’t mention much about AI. Apple framed AI not as a standalone product but as an invisible layer woven through AirPods, Watch, and iPhone—from Live Translation and Workout Buddy nudges to on-device models powering health insights and generative photo features. Instead of prompts and chatbots, Apple Intelligence showed up as contextual, ambient assistance designed to disappear into the flow of everyday use. And funnily enough, iOS 26 was mentioned in passing, as if Apple assumed everyone watching had seen the prior episode—er, keynote—in June.

It’s interesting that the keynote opened with that Steve Jobs quote about design. Maybe someone in Cupertino read my piece breaking down Liquid Glass where I argued:

People misinterpret this quote all the time to mean design is only how it works. That is not what Steve meant. He meant, design is both what it looks like and how it works.

(Actually, it was probably what Casey Newton wrote in Platformer about Liquid Glass.) 

If you step back and consider why Apple improves its hardware and software every year, it goes back to their implied mission: to make products that better human lives. This is exemplified by the “Dear Apple” spot they played as part of the segment on Apple Watch.


Apple’s foray into wearables—beyond ear- and headphones—with Apple Watch ten years ago was really an entry into health technology. Lives have been saved and people have gotten healthier because Apple technology enabled them. Dr. Sumbul Ahmad Desai, VP of Health, mentioned that their new hypertension detection feature could notify over one million people with undiagnosed hypertension in its first year. Apple developed the feature using advanced machine learning, drawing on training data from multiple studies involving over 100,000 participants, then clinically validated it in a separate study of over 2,000 participants. In other words, they’ve become a real force in shaping health tech.

And what also amazes me is that AirPods Pro 3 will now help with health and fitness tracking, too. (See above.)

There’s no doubt that Apple’s formal design is always top-notch. But it’s great to be reminded of their why, and how these must-buy-by-Christmas devices are capable of solving real-world problems and bettering our lives. (And no, I don’t think having a lighter, thinner, faster, cooler phone falls into this category. We can have both moral purpose and commercial purpose.)

Josh Miller, CEO, and Hursh Agrawal, CTO, of The Browser Company:

Today, The Browser Company of New York is entering into an agreement to be acquired by Atlassian in an all-cash transaction. We will operate independently, with Dia as our focus. Our objective is to bring Dia to the masses.

Super interesting acquisition here. There is zero overlap as far as I can tell; Atlassian’s move is out of left field. Dia’s early users were college students, and The Browser Company more recently opened it up to former Arc users. Is this bet for Atlassian—the company that makes tech-company-focused products like Jira and Confluence—about the future of work and collaboration? Is this their first move against Salesforce? 🤔

preview-1757007229906.jpeg

Your Tuesday in 2030

Or why The Browser Company is being acquired to bring Dia to the masses.

open.substack.com iconopen.substack.com

Simon Sherwood, writing in The Register:

Amazon Web Services CEO Matt Garman has suggested firing junior workers because AI can do their jobs is “the dumbest thing I’ve ever heard.”

Garman made that remark in conversation with AI investor Matthew Berman, during which he talked up AWS’s Kiro AI-assisted coding tool and said he’s encountered business leaders who think AI tools “can replace all of our junior people in our company.”

That notion led to the “dumbest thing I’ve ever heard” quote, followed by a justification that junior staff are “probably the least expensive employees you have” and also the most engaged with AI tools.

“How’s that going to work when ten years in the future you have no one that has learned anything,” he asked. “My view is you absolutely want to keep hiring kids out of college and teaching them the right ways to go build software and decompose problems and think about it, just as much as you ever have.”

Yup. I agree.

preview-1756189648262.jpg

AWS CEO says AI replacing junior staff is 'dumbest idea'

They're cheap and grew up with AI … so you're firing them why?

theregister.com icontheregister.com

Jessica Davies reports that new publisher data suggests that some sites are getting 25% less traffic from Google than the previous year.

Writing in Digiday:

Organic search referral traffic from Google is declining broadly, with the majority of DCN member sites — spanning both news and entertainment — experiencing traffic losses from Google search between 1% and 25%. Twelve of the respondent companies were news brands, and seven were non-news.

Jason Kint, CEO of DCN, says that this is a “direct consequence of Google AI Overviews.”

I wrote previously about the changing economics of the web here, here, and here.

And relatedly, Eric Mersch writes in a LinkedIn post that Monday.com’s stock fell 23% after co-CEO Roy Mann said, “We are seeing some softness in the market due to Google algorithm,” during their Q2 earnings call, and analysts kept hammering him and the CFO about how the algorithm changes might affect customer acquisition.

Analysts continued to press the issue, which caught company management completely off guard. Matthew Bullock from Bank of America Merrill Lynch asked frankly, “And then help us understand, why call this out now? How did the influence of Google SEO disruption change this quarter versus 1Q, for example?” The CEO could only respond, “So look, I think like we said, we optimize in real-time. We just budget daily,” implying that they were not aware of the problem until they saw Q2 results.

This is the first public sign that the shift from Google to AI-powered searches is having an impact.

preview-1755493440980.jpg

Google AI Overviews linked to 25% drop in publisher referral traffic, new data shows

The majority of Digital Content Next publisher members are seeing traffic losses from Google search between 1% and 25% due to AI Overviews.

digiday.com icondigiday.com

I enjoyed this interview with Notion’s CEO, Ivan Zhao, over at the Decoder podcast, with substitute host Casey Newton. What I didn’t quite get when I first used Notion was the “LEGO” aspect of it. Their vision is to build business software that is highly malleable and configurable to do all sorts of things. Here’s Zhao:

Well, because it didn’t quite exist with software. If you think about the last 15 years of [software-as-a-service], it’s largely people building vertical point solutions. For each buyer, for each point, that solution sort of makes sense. The way we describe it is that it’s like a hard plastic solution for your problem, but once you have 20 different hard plastic solutions, they sort of don’t fit well together. You cannot tinker with them. As an end user, you have to jump between half a dozen of them each day.

That’s not quite right, and we’re also inspired by the early computing pioneers who in the ‘60s and ‘70s thought that computing should be more LEGO-like rather than like hard plastic. That’s what got me started working on Notion a long time ago, when I was reading a computer science paper back in college.

From a user experience POV, Notion is both simple and exceedingly complicated. Taking notes is easy. Building the system for a workflow, not so much.

In the second half, Newton (gently) presses Zhao on the impact of AI on the workforce and how productivity software like Notion could replace headcount.

Newton: Do you think that AI and Notion will get to a point where executives will hire fewer people, because Notion will do it for them? Or are you more focused on just helping people do their existing jobs?

Zhao: We’re actually putting out a campaign about this, in the coming weeks or months. We want to push out a more amplifying, positive message about what Notion can do for you. So, imagine the billboard we’re putting out. It’s you in the center. Then, with a tool like Notion or other AI tools, you can have AI teammates. Imagine that you and I start a company. We’re two co-founders, we sign up for Notion, and all of a sudden, we’re supplemented by other AI teammates, some taking notes for us, some triaging, some doing research while we’re sleeping.

Zhao dodges the “hire fewer people” part of the question and instead, answers with “amplifying” people or making them more productive.

preview-1755062355751.jpg

Notion CEO Ivan Zhao wants you to demand better from your tools

Notion’s Ivan Zhao on AI agents, productivity, and how software will change in the future.

theverge.com icontheverge.com

Yesterday, OpenAI launched GPT-5, their latest and greatest model, replacing the confusing assortment of GPT-4o, o3, o4-mini, etc. with just two options: GPT-5 and GPT-5 Pro. Reasoning is built in, and the new model is smart enough to know when to think harder and when a quick answer suffices.

Simon Willison deep dives into GPT-5, exploring its mix of speed and deep reasoning, massive context limits, and competitive pricing. He sees it as a steady, reliable default for everyday work rather than a radical leap forward:

I’ve mainly explored full GPT-5. My verdict: it’s just good at stuff. It doesn’t feel like a dramatic leap ahead from other LLMs but it exudes competence—it rarely messes up, and frequently impresses me. I’ve found it to be a very sensible default for everything that I want to do. At no point have I found myself wanting to re-run a prompt against a different model to try and get a better result.

It’s a long technical read but interesting nonetheless.

preview-1754630277862.jpg

GPT-5: Key characteristics, pricing and model card

I’ve had preview access to the new GPT-5 model family for the past two weeks (see related video) and have been using GPT-5 as my daily-driver. It’s my new favorite …

simonwillison.net iconsimonwillison.net

Jay Hoffman, writing in his excellent The History of the Web website, reflects on Kevin Kelly’s 2005 Wired piece that celebrated the explosive growth of blogging—50 million blogs, one created every two seconds—and predicted a future powered by open participation and user-created content. Kelly was right about the power of audiences becoming creators, but he missed the crucial detail: 2005 would mark the peak of that open web participation before everyone moved into centralized platforms.

There are still a lot of blogs, 600 million by some accounts. But they have been supplanted over the years by social media networks. Commerce on the web has consolidated among fewer and fewer sites. Open source continues to be a major backbone to web technologies, but it is underfunded and powered almost entirely by the generosity of its contributors. Open API’s barely exist. Forums and comment sections are finding it harder and harder to beat back the spam. Users still participate in the web each and every day, but it increasingly feels like they do so in spite of the largest web platforms and sites, not because of them.

My blog—this website—is a direct response to the consolidation. This site and its content are owned and operated by me and not stuck behind a login or paywall to be monetized by Meta, Medium, Substack, or Elon Musk. That is the open web.

Hoffman goes on to say, “The web was created for participation, by its nature and by its design. It can’t be bottled up long.” He concludes with:

Independent journalists who create unique and authentic connections with their readers are now possible. Open social protocols that experts truly struggle to understand, is being powered by a community that talks to each other.

The web is just people. Lots of people, connected across global networks. In 2005, it was the audience that made the web. In 2025, it will be the audience again.

preview-1754534872678.jpg

We Are Still the Web

Twenty years ago, Kevin Kelly wrote an absolutely seminal piece for Wired. This week is a great opportunity to look back at it.

thehistoryoftheweb.com iconthehistoryoftheweb.com

For the past year, CPG behemoth Unilever has been “working with marketing services group Brandtech to build up its Beauty AI Studio: a bespoke, in-house system inside its beauty and wellbeing business. Now in place across 18 different markets (the U.S. and U.K. among them), the studio is being used to make assets for paid social, programmatic display inventory and e-commerce usage across brands including Dove Intensive Repair, TRESemme Lamellar Shine and Vaseline Gluta Hya.”

Sam Bradley, writing in Digiday:

The system relies on Pencil Pro, a generative AI application developed by Brandtech Group. The tool draws on several large language models (LLMs), as well as API access to Meta and TikTok for effectiveness measurement. It’s already used by hearing-care brand Amplifon to rapidly produce text and image assets for digital ad channels.

In Unilever’s process, marketers use prompts and their own insights about target audiences to generate images and video based on 3D renders of each product, a practice sometimes referred to as “digital twinning.” Each brand in a given market is assigned a “BrandDNAi” — an AI tool that can retrieve information about brand guidelines and relevant regulations and that provides further limitations to the generative process.

So far, they haven’t used this system to generate AI humans. Yet.

Inside Unilever’s AI beauty marketing assembly line — and its implications for agencies

The CPG giant has created an AI-augmented in-house production system. Could it be a template for others looking to bring AI in house?

digiday.com icondigiday.com

In many ways, this excellent article by Kaustubh Saini for Final Round AI’s blog is a cousin to my essay on the design talent crisis. But it’s about what happens when people “become” developers and only know vibe coding.

The appeal is obvious, especially for newcomers facing a brutal job market. Why spend years learning complex programming languages when you can just describe what you want in plain English? The promise sounds amazing: no technical knowledge required, just explain your vision and watch the AI build it.

In other words, these folks don’t understand the code and, well, bad things can happen.

The most documented failure involves an indie developer who built a SaaS product entirely through vibe coding. Initially celebrating on social media that his “saas was built with Cursor, zero hand written code,” the story quickly turned dark.

Within weeks, disaster struck. The developer reported that “random things are happening, maxed out usage on api keys, people bypassing the subscription, creating random shit on db.” Being non-technical, he couldn’t debug the security breaches or understand what was going wrong. The application was eventually shut down permanently after he admitted “Cursor keeps breaking other parts of the code.”

This failure illustrates the core problem with vibe coding: it produces developers who can generate code but can’t understand, debug, or maintain it. When AI-generated code breaks, these developers are helpless.

I don’t foresee something this disastrous with design. I mean, a newbie designer wielding an AI-enabled Canva or Figma can’t tank a business alone, because the client will have eyes on it and won’t let through something that doesn’t work. It could be a design atrocity, but it’ll likely be fine.

This *can* happen to a designer using vibe coding tools, however. Full disclosure: I’m one of them. This site is partially vibe-coded. My Severance fan project is entirely vibe-coded.

But back to the idea of a talent crisis. In the developer world, it’s already happening:

The fundamental problem is that vibe coding creates what experts call “pseudo-developers.” These are people who can generate code but can’t understand, debug, or maintain it. When AI-generated code breaks, these developers are helpless.

In other words, they don’t have the skills necessary to be developers because they can’t do the basics. They can’t debug, don’t understand architecture, have no code review skills, and basically have no fundamental knowledge of what it means to be a programmer. “They miss the foundation that allows developers to adapt to new technologies, understand trade-offs, and make architectural decisions.”

Again, even assuming our junior designers have the requisite fundamental design skills, not having spent time developing their craft and strategic skills through experience will be detrimental to them and to any org that hires them.

preview-1753377392986.jpg

How AI Vibe Coding Is Destroying Junior Developers' Careers

New research shows developers think AI makes them 20% faster but are actually 19% slower. Vibe coding is creating unemployable pseudo-developers who can't debug or maintain code.

finalroundai.com iconfinalroundai.com

Sonos announced yesterday that interim CEO Tom Conrad was made permanent. From their press release:

Sonos has achieved notable progress under Mr. Conrad’s leadership as Interim CEO. This includes setting a new standard for the quality of Sonos’ software and product experience, clearing the path for a robust new product pipeline, and launching innovative new software enhancements to flagship products Sonos Ace and Arc Ultra.

Conrad surely navigated this minefield well after the disastrous app redesign that wiped almost $500 million from the company’s market value and cost CEO Patrick Spence his job. My sincere hope is that Conrad rebuilds Sonos’s reputation by continuing to improve its products.

Sonos Appoints Tom Conrad as Chief Executive Officer

Sonos Website

sonos.com iconsonos.com
Retro-style robot standing at a large control panel filled with buttons, switches, and monitors displaying futuristic data.

The Era of the AI Browser Is Here

For nearly three years, Arc from The Browser Company has been my daily driver. To be sure, there was a little bit of a learning curve. Tabs disappeared after a day unless you pinned them. Then they became almost like bookmarks. Tabs were on the left side of the window, not at the top. Spaces let me organize my tabs based on use cases like personal, work, or finances. I could switch between tabs using control-Tab and saw little thumbnails of the pages, similar to the app switcher on my Mac. Shift-command-C copied the current page’s URL. 

All these little interface ideas added up to a productivity machine for web jockeys like myself. And so, I was saddened to hear in May that The Browser Company stopped actively developing Arc in favor of a new AI-powered browser called Dia. (They are keeping Arc updated with maintenance releases.)

They had started beta-testing Dia with college students first and just recently opened it up to Arc members. I finally got access to Dia a few weeks ago. 

But before diving into Dia, I should mention that I also got access to another AI browser, Perplexity’s Comet, about a week ago. I’m on their Pro plan but somehow got an invite in my email. I had thought it was limited to those on their much more expensive Max plan. Shhh.

So this post is about both and how the future of web browsing is obviously AI-assisted, because it feels so natural.

Chat With Your Tabs

Landing page for Dia, a browser tool by The Browser Company, showcasing the tagline “Write with your tabs” and a button for early access download, along with a UI mockup for combining tabs into a writing prompt.

To be honest, I used Dia in fits and starts. It was easy to import my profiles from Arc and have all my bookmarks transferred over. But I was missing all the pro-level UI niceties that Arc had. Tabs were back at the top and acted like tabs (though they just brought back sidebar tabs in the last week). There were no Spaces. I felt like it was 2021 all over again. I tried to stick with it for a week.

What Dia offers that Arc does not is, of course, a way to “chat” with your tabs. It’s a chat sidebar to the right of the web page that has the context of that page you’re on. You can also add additional tabs to the chat context by simply @mentioning them.

In a recent article about Dia in The New York Times, reporter Brian X. Chen describes using it to summarize a 22-minute YouTube video about car jump starters, instantly surfacing the top products without watching the whole thing. This is a vivid illustration of the “chat with your tabs” value prop. Saving time.

I’ve been doing the same thing: asking the chat to summarize a page for me, or to explain some technical documentation in plain English. Or I use it as a fuzzy search to find a quote on the page that mentions something specific. For example, if I’m reading an interview with the CEO of Perplexity and I want to know whether he’s tried the Dia browser yet, I can ask, “Has he used Dia yet?” instead of reading through the whole thing.

Screenshot of the Dia browser displaying a Verge article about Perplexity’s CEO, with an AI-generated sidebar summary clarifying that Aravind Srinivas has not used Dia.


Another use case is to open a few tabs and ask for advice. For example, I can open up a few shirts from an e-commerce store and ask for a recommendation.

Screenshot of the Dia browser comparing shirts on the Bonobos website, with multiple tabs open for different shirt styles. The sidebar displays AI-generated advice recommending the Everyday Oxford Shirt for a smart casual look, highlighting its versatility, fit options, and stretch comfort.

Using Dia to compare shirts and get a smart casual recommendation from the AI.

Dia also has customizable “skills” which are essentially pre-saved prompts. I made one to craft summary bios from LinkedIn profiles.

Screenshot of the Dia browser on Josh Miller’s LinkedIn profile, with the “skills” feature generating a summarized biography highlighting his role as CEO of The Browser Company and his career background.

Using Dia’s skills feature to generate a summarized biography from a LinkedIn profile.

It’s cool. But I found that it’s a little limited because the chat is usually just with the tabs that you feed Dia. It helps you digest and process information. In other words, it’s an incremental step up from ChatGPT.

Enter Comet.

Browsing Done for You

Landing page for Comet, an AI-powered browser by Perplexity, featuring the tagline “Browse at the speed of thought” with a prominent “Get Comet” download button.

Comet by Perplexity also allows you to chat with your tabs. Asking about that Verge interview, I received a very similar answer. (No, Aravind Srinivas has not used Dia yet.) And because Perplexity search is integrated into Comet, I find that it is much better at context-setting and answering questions than Dia. But that’s not Comet’s killer feature.

Screenshot of the Comet browser displaying a Verge article about Perplexity’s CEO, with the built-in AI assistant on the right confirming Aravind Srinivas has not used the Dia browser.

Viewing the same article in Comet, with its AI assistant answering questions about the content.

Instead, it’s doing stuff with your tabs. Comet’s onboarding experience shows a few use cases like replying to emails and setting meetings, or filling an Instacart cart with the ingredients for butter chicken.

Just like Dia, when I first launched Comet, I was able to import my profiles from Arc, which included bookmarks and cookies. I was essentially still logged into all the apps and sites I was already logged into. So I tried an assistant experiment. 

One thing I often do is cross-check restaurants that have availability on OpenTable against their Yelp ratings. I tend to agree more with Yelpers, who are usually harsher critics than OpenTable diners. So I asked Comet to “Find me the highest rated sushi restaurants in San Diego that have availability for 2 at 7pm next Friday night on OpenTable. Pick the top 10 and then rank them by Yelp rating.” And it worked! If I really wanted to, I could say “Book Takaramono sushi” and it would have done so. (Actually, I did, and then quickly canceled.)

The Comet assistant helped me find a sushi restaurant reservation. Video is sped up 4x.

I tried a different experiment, something I heard Aravind Srinivas mention in his interview with The Verge. I navigated to Gmail and checked three emails I wanted to unsubscribe from. I asked the assistant, “Unsubscribe from the checked emails.” The agent then essentially took over my Gmail screen, opened the first checked email, and clicked on the unsubscribe link. It repeated this process for the other two emails, though it ran into a couple of snags. First, Gmail doesn’t keep the state of the checked emails when you click into one, but the Comet assistant was smart enough to remember the subject lines of all three. Second, it had some issues filling out the right email address in one unsubscribe form, so that one didn’t work. Of the three unsubscribes, it succeeded on two.

The whole process also took about two minutes. It was wild, though, to see my Gmail being navigated by the machine. So that you know it’s in control, Comet puts a teal glow around the edges of the page, not dissimilar to the purple glow of the new Siri. And I could have stopped Comet at any time by clicking a stop button. Obviously, sitting there for two minutes watching my computer unsubscribe from three emails is a lot longer than the 20 seconds it would have taken me to do this manually, but as with many agents, the idea is to delegate a process and come back later to check on it.

I Want My AI Browser

A couple hours after Perplexity launched Comet, Reuters published a leak with the headline “Exclusive: OpenAI to release web browser in challenge to Google Chrome.” Perplexity’s CEO seems to suggest the timing was deliberate, to take a bit of wind out of their sails. The Justice Department is still trying to strong-arm Google into divesting itself of Chrome. If that happens, we’re talking about breaking the most profitable feedback loop in tech history: Chrome funnels search queries directly to Google, which powers their ad empire, which funds Chrome development. Break that cycle, and suddenly you’ve got an independent Chrome that could default to any search engine, giving AI-first challengers like The Browser Company, Perplexity, and OpenAI a real shot at users.

Regardless of Chrome’s fate, I strongly believe that AI-enabled browsers are the future. Once I started chatting with my tabs, asking for summaries, seeking clarification, asking for too-technical content to be dumbed down to my level, I just can’t go back. The agentic stuff that Perplexity’s Comet is at the forefront of is just the beginning. It’s not perfect yet, but I think its utility will get there as the models get better. To quote Srinivas again:

I’m betting on the fact that in the right environment of a browser with access to all these tabs and tools, a sufficiently good reasoning model — like slightly better, maybe GPT-5, maybe like Claude 4.5, I don’t know — could get us over the edge where all these things are suddenly possible and then a recruiter’s work worth one week is just one prompt: sourcing and reach outs. And then you’ve got to do state tracking… That’s the extent to which we have an ambition to make the browser into something that feels more like an OS where these are processes that are running all the time.

It must be said that both Opera and Microsoft’s Edge also have AI built in. However, those features feel more like afterthoughts, the same way that Arc’s own AI features felt like tiny improvements.

The AI-powered ideas in both Dia and Comet are a step change. But the basics also have to be there, and in my opinion, should be better than what Chrome offers. The interface innovations that made Arc special shouldn’t be sacrificed for AI features. Arc is/was the perfect foundation. Integrate an AI assistant that can be personalized to care about the same things you do so its summaries are relevant. The assistant can be agentic and perform tasks for you in the background while you focus on more important things. In other words, put Arc, Dia, and Comet in a blender and that could be the perfect browser of the future.

From UX Magazine:

Copilots helped enterprises dip their toes into AI. But orchestration platforms and tools are where the real transformation begins — systems that can understand intent, break it down, distribute it, and deliver results with minimal hand-holding.

Think of orchestration as how “meta-agents” are conducting other agents.

The first iteration of AI in SaaS was copilots. They were like helpful interns eagerly awaiting your next command. Orchestration platforms are more like project managers. They break down big goals into smaller tasks, assign them to the right AI agents, and keep everything coordinated. This shift is changing how companies design software and user experiences, making things more seamless and less reliant on constant human input.

For designers and product teams, it means thinking about workflows that cross multiple tools, making sure users can trust and control what the AI is doing, and starting small with automation before scaling up.
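The copilot-versus-orchestrator distinction can be sketched in code. The following is a minimal, hypothetical illustration (not any real product’s API): a meta-agent decomposes a goal into tasks and routes each one to a registered worker agent. All names here are made up for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    kind: str      # which kind of agent should handle this
    payload: str   # the work item itself

# Worker "agents" are plain functions here; in a real system each
# would wrap an LLM call or a tool integration.
def research_agent(payload: str) -> str:
    return f"research notes on {payload}"

def writing_agent(payload: str) -> str:
    return f"draft based on {payload}"

class Orchestrator:
    """Breaks a goal into tasks, assigns each to the right agent,
    and collects the coordinated results."""

    def __init__(self) -> None:
        self.registry: dict[str, Callable[[str], str]] = {}

    def register(self, kind: str, agent: Callable[[str], str]) -> None:
        self.registry[kind] = agent

    def plan(self, goal: str) -> list[Task]:
        # A real planner would use an LLM to decompose the goal;
        # this stub hard-codes a two-step plan.
        return [Task("research", goal), Task("write", goal)]

    def run(self, goal: str) -> list[str]:
        # Dispatch each planned task to its registered agent.
        return [self.registry[t.kind](t.payload) for t in self.plan(goal)]

orch = Orchestrator()
orch.register("research", research_agent)
orch.register("write", writing_agent)
results = orch.run("Q3 launch plan")
```

A copilot, by contrast, would be a single function waiting on each user command; the orchestrator owns the plan and calls the workers itself, which is the shift the article describes.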

Beyond Copilots: The Rise of the AI Agent Orchestration Platform

AI agent orchestration platforms are replacing simple copilots, enabling enterprises to coordinate autonomous agents for smarter, more scalable workflows.

uxmag.com iconuxmag.com

In case you missed it, there’s been a major shift in the AI tool landscape.

On Friday, OpenAI’s $3 billion offer to acquire AI coding tool Windsurf expired. Windsurf is the Pepsi to Cursor’s Coke. They’re both IDEs, the programming desktop applications that software developers use to write code. Think of them as supercharged text editors with AI built in.

On Friday evening, Google announced that it had hired Windsurf’s CEO Varun Mohan, co-founder Douglas Chen, and several key researchers for $2.4 billion.

On Monday, Cognition, the company behind Devin, the self-described “AI engineer” announced that it had acquired Windsurf for an undisclosed sum, but noting that its remaining 250 employees will “participate financially in this deal.”

Why does this matter to designers?

The AI tools market is changing very rapidly. With AI helping to write these applications, their numbers and features are always increasing—or in this case, consolidating. Choose wisely before investing too deeply in one particular tool. The one piece of advice I would give here is to avoid lock-in: don’t get tied to a vendor, and ensure that your tool of choice can export your work—the code.

Jason Lemkin has more on the business side of things and how it affects VC-backed startups.

preview-1752536770924.png

Did Windsurf Sell Too Cheap? The Wild 72-Hour Saga and AI Coding Valuations

The last 72 hours in AI coding have been nothing short of extraordinary. What started as a potential $3 billion OpenAI acquisition of Windsurf ended with Google poaching Windsurf’s CEO and co…

saastr.com iconsaastr.com

This post has been swimming in my head since I read it. Elena Verna, who joined Lovable just over a month ago to lead marketing and growth, writing in her newsletter, observes that everyone at the company is an AI-native employee. “An AI-native employee isn’t someone who ‘uses AI.’ It’s someone who defaults to AI,” she says.

On how they ship product:

Here, when someone wants to build something (anything) - from internal tools, to marketing pages, to writing production code - they turn to AI and… build it. That’s it.

No headcount asks. No project briefs. No handoffs. Just action.

At Lovable, we’re mostly building with… Lovable. Our Shipped site is built on Lovable. I’m wrapping hackathon sponsorship intake form in Lovable as we speak. Internal tools like credit giveaways and influencer management? Also Lovable (soon to be shared in our community projects so ya’ll can remix them too). On top of that, engineering is using AI extensively to ship code fast (we don’t even really have Product Managers, so our engineers act as them).

I’ve been hearing about more and more companies operating this way. Crazy time to be alive.

More on this topic in a future long-form post.

preview-1752160625907.png

The rise of the AI-native employee

Managers without vertical expertise, this is your extinction call

elenaverna.com iconelenaverna.com

Paul Worthington writing about the recent Cannes Festival of Creativity:

…nostalgia is rapidly becoming a major idée du jour among marketers targeting that oh-so-desirable “Gen Z” demographic.

As a result, it should come as no surprise that if you were to walk around Cannes over the past month or so, you’d be forgiven for thinking brands no longer had any interest in the future: Lisa Frank notebooks. Tamagotchi cameos. Taglines from 1999. Brand after brand strapping itself to the past, seeking refuge in comfort. Instacart. Mattel. Burger King. Skoda. All treating relevance as if it were a rerun.

But along with nostalgia, another theme was present at Cannes—differentiation:

Cannes was also a parade of brands betting on something riskier. Something sharper. Something new. Liquid Death. Stripe. Tesla. Anduril. Companies building out from belief systems focused resolutely on what makes them unique. Making things you couldn’t have predicted because they weren’t remixes of the past—they were statements of the future.

Worthington argues that these two themes are diametrically opposed. Nostalgia brands are “fundamentally risk-averse” and feel safe. While differentiated brands are “risk-embracing,” betting that consumers are desperate for “something weird, sharp, and built from scratch.”

preview-1751949966152.png

Nostalgia Vs Differentiation

Beware winning today and losing the future.

offkilter.substack.com iconoffkilter.substack.com
Stylized artwork showing three figures in profile - two humans and a metallic robot skull - connected by a red laser line against a purple cosmic background with Earth below.

Beyond Provocative: How One AI Company’s Ad Campaign Betrays Humanity

I was in London last week with my family and spotted this ad in a Tube car. With the headline “Humans Were the Beta Test,” this is for Artisan, a San Francisco-based startup peddling AI-powered “digital workers.” Specifically an AI agent that will perform sales outreach to prospects, etc.

London Underground tube car advertisement showing "Humans Were the Beta Test" with subtitle "The Era of AI Employees Is Here" and Artisan company branding on a purple space-themed background.

Artisan ad as seen in London, June 2025

I’ve long left the Bay Area, but I know that the 101 highway is littered with cryptic billboards from tech companies, where the copy only makes sense to people in the tech industry, which, to be fair, is a large part of the Bay Area economy. Artisan is infamous for its “Stop Hiring Humans” campaign, which went up late last year. Being based in San Diego, much further south in California, I had no idea. Artisan wasn’t even on my radar.

Highway billboard reading "Stop Hiring Humans, Hire Ava, the AI BDR" with Artisan branding and an AI avatar image on the right side.

Artisan billboard off Highway 101, between San Francisco and SFO Airport

There’s something to be said about shockvertising. It’s meant to be shocking or offensive to grab attention. And the company certainly increased its brand awareness, claiming a 197% jump in brand search growth. Artisan CEO Jaspar Carmichael-Jack wrote a post-mortem about the campaign on the company blog:

The impact exceeded our wildest expectations. When I meet new people in San Francisco, 70% of the time they know about Artisan and what we do. Before, that number was around 5%. aHrefs ranked us #2 fastest growing AI companies by brand search. We’ve seen 1000s of sales meetings getting booked.

According to him, “October and November became our biggest months ever, bringing in over $2M in new ARR.”

I don’t know how I feel about this. My initial reaction to seeing “Humans Were the Beta Test” in London was disgust. As my readers know, I’m very much pro-AI, but I’m also very pro-human. Calling humanity a beta test is simply tone-deaf and nihilistic. It belittles our worth and bets on the end of our species. Yes, yes, I know it’s just advertising, and some ads are simply offensive to various people for a variety of reasons. But as technology people, Artisan should know better.

Despite ChatGPT’s soaring popularity, there is still ample fear about AI, especially around job displacement and safety. The discourse around AI is already too hyped up.

I even think “Stop Hiring Humans” is slightly less offensive. As to why the company chose to create a rage-bait campaign, Carmichael-Jack says:

We knew that if we made the billboards as vanilla as everybody else’s, nobody would care. We’d spend $100s of thousands and get nothing in return.

We spent days brainstorming the campaign messaging. We wanted to draw eyes and spark interest, we wanted to cause intrigue with our target market while driving a bit of rage with the wider public. The messaging we came up with was simple but provocative: “Stop Hiring Humans.”

Bus stop advertisement displaying "Stop Hiring Humans" with "The Era of AI Employees Is Here" and three human faces, branded by Artisan, on a city street with a passing bus.

When the full campaign, which included 50 bus shelter posters, went up, death threats started pouring in. He was in Miami on business and thought going home to San Francisco might be risky. “I was like, I’m not going back to SF,” Carmichael-Jack told The San Francisco Standard. “I will get murdered if I go back.”

(For the record, I’m morally opposed to death threats. They’re cowardly and incredibly scary for the recipient, regardless of who that person is.)

I’ve done plenty of B2B advertising campaigns in my day. Shock is not a tactic I would have used, nor one I would ever recommend to a brand trying to raise positive awareness. I wish Artisan had used the services of a good B2B ad agency. There are plenty out there, and I used to work at one.

Think about the brands that have used shockvertising tactics in the past, like Benetton and Calvin Klein. I’ve liked Oliviero Toscani’s controversial photographs, central to Benetton’s campaigns, because they instigate a positive *liberal* conversation. The Pope kissing Egypt’s Islamic leader invites dialogue about religious differences and coexistence, and provocatively expresses the campaign concept of “Unhate.”

But Calvin Klein’s sexualized high schoolers? No. There’s no good message in that.

And for me, there’s no good message in promoting the death of the human race. After all, who will pay for the service after we’re all end-of-lifed?

Here we go. Figma has just dropped their S-1, or their registration for an initial public offering (IPO).

A financial metrics slide showing Figma's key performance indicators on a dark green background. The metrics displayed are: $821M LTM revenue, 46% YoY revenue growth, 18% non-GAAP operating margin, 91% gross margin, 132% net dollar retention, 78% of Forbes 2000 companies use Figma, and 76% of customers use 2 or more products.

Rollup of stats from Figma’s S-1.

While a lot of the risk factors are boilerplate—legalese to cover their bases—the one about AI is particularly interesting, “Competitive developments in AI and our inability to effectively respond to such developments could adversely affect our business, operating results, and financial condition.”

Developments in AI are already impacting the software industry significantly, and we expect this impact to be even greater in the future. AI has become more prevalent in the markets in which we operate and may result in significant changes in the demand for our platform, including, but not limited to, reducing the difficulty and cost for competitors to build and launch competitive products, altering how consumers and businesses interact with websites and apps and consume content in ways that may result in a reduction in the overall value of interface design, or by otherwise making aspects of our platform obsolete or decreasing the number of designers, developers, and other collaborators that utilize our platform. Any of these changes could, in turn, lead to a loss of revenue and adversely impact our business, operating results, and financial condition.

There’s a lot of uncertainty they’re highlighting:

  • Could competitors use AI to build competing products?
  • Could AI reduce the need for websites and apps which decreases the need for interfaces?
  • Could companies reduce workforces, thus reducing the number of seats they buy?

These are all questions the greater tech industry is asking.


Figma Files Registration Statement for Proposed IPO | Figma Blog

An update on Figma's path to becoming a publicly traded company: our S-1 is now public.

figma.com iconfigma.com