
83 posts tagged with “technology industry”

Is the AI bubble about to burst? Apparently, AI prompt-to-code tools like Lovable and v0 have peaked and are on their way down.

Alistair Barr writing for Business Insider:

The drop-off raises tough questions for startups that flaunted exponential annual recurring revenue growth just months ago. Analysts wrote that much of that revenue comes from month-to-month subscribers who may churn as quickly as they signed up, putting the durability of those flashy numbers in doubt.

Barr interviewed Eric Simons, CEO of Bolt, who said:

“This is the problem across all these companies right now. The churn rate for everyone is really high,” Simons said. “You have to build a retentive business.”


AI vibe coding tools were supposed to change everything. Now traffic is crashing.

Vibe coding tools have seen traffic drop, with Vercel’s v0 and Lovable seeing significant declines, raising sustainability questions, Barclays warns.

businessinsider.com

In an announcement to users this morning, Visual Electric said they were being acquired by Perplexity—or more accurately, the team that makes Visual Electric will be hired by Perplexity. The service will shut down in the next 90 days.

Today we’re sharing the next step in Visual Electric’s journey: we’ve been acquired by Perplexity. This is a milestone that marks both an exciting opportunity for our team and some big changes for our product.

Over the next 90 days we’ll be sunsetting Visual Electric, and our team will be forming a new Agent Experiences group at Perplexity.

While we’ve seen acquihires and shutdowns in the AI infrastructure space (e.g., Scale AI) and the coding space (e.g., Windsurf), I don’t believe we’ve seen a company in the image or video gen AI space have an exit event like this yet. Obviously, The Browser Company announced its acquisition by Atlassian last month.

I believe building gen AI tools at this moment is incredibly competitive. I think it takes an even stronger-stomached entrepreneur than in the pre-ChatGPT era. So kudos to the folks at Visual Electric for having a good outcome and getting to continue their work at Perplexity. But I don’t think this is the last consolidation we’ll see in this space.


Visual Electric is Joining Perplexity

Today we’re sharing the next step in Visual Electric’s journey: we’ve been acquired by Perplexity. This is a milestone that marks both an exciting opportunity for our team and some big changes for our product.

visualelectric.com

Tim Berners-Lee, the father of the web who gave away the technology for free, says that we are at an inflection point with data privacy and AI. But before he makes that point, he reminds us that we are the product:

Today, I look at my invention and I am forced to ask: is the web still free today? No, not all of it. We see a handful of large platforms harvesting users’ private data to share with commercial brokers or even repressive governments. We see ubiquitous algorithms that are addictive by design and damaging to our teenagers’ mental health. Trading personal data for use certainly does not fit with my vision for a free web.

On many platforms, we are no longer the customers, but instead have become the product. Our data, even if anonymised, is sold on to actors we never intended it to reach, who can then target us with content and advertising. This includes deliberately harmful content that leads to real-world violence, spreads misinformation, wreaks havoc on our psychological wellbeing and seeks to undermine social cohesion.

And about that fork in the road with AI:

In 2017, I wrote a thought experiment about an AI that works for you. I called it Charlie. Charlie works for you like your doctor or your lawyer, bound by law, regulation and codes of conduct. Why can’t the same frameworks be adopted for AI? We have learned from social media that power rests with the monopolies who control and harvest personal data. We can’t let the same thing happen with AI.


Why I gave the world wide web away for free

My vision was based on sharing, not exploitation – and here’s why it’s still worth fighting for

theguardian.com

In my most recent post, I called out our design profession for our part in developing these addictive products. Jeffrey Inscho brings it back up to the tech industry at large and observes that these companies are actually publishers:

The executives at these companies will tell you they’re neutral platforms, that they don’t choose what content gets seen. This is a lie. Every algorithmic recommendation is an editorial decision. When YouTube’s algorithm suggests increasingly extreme political content to keep someone watching, that’s editorial. When Facebook’s algorithm amplifies posts that generate angry reactions, that’s editorial. When Twitter’s trending algorithms surface conspiracy theories, that’s editorial.

They are publishers. They have always been publishers. They just don’t want the responsibility that comes with being publishers.

His point is that if these social media platforms are sorting and promoting posts, it’s an editorial approach and they should be treated like newspapers. “It’s like a newspaper publisher claiming they’re not responsible for what appears on their front page because they didn’t write the articles themselves.”

The answer, Inscho argues, is regulation of the algorithms.

Turn Off the Internet

Big tech has built machines designed for one thing: to hold …

staticmade.com

When I read this, I thought to myself, “Geez, this is what a designer does.” I think there is a lot of overlap between what we do as product designers and what product managers do. One critical one—in my opinion, and why we’re calling ourselves product designers—is product sense. Product sense is the skill of finding real user needs and creating solutions that have impact.

So I think people can read this with two lenses:

  • If you’re a designer who executes the assignments you’re given, jumping into Figma right away, read this to be more well-rounded and understand the why of what you’re making.
  • If you’re a designer who spends 80% of your time questioning everything and defining the problem, and only 20% of your time in Figma, read this to see how much overlap you actually have with a PM.

BTW, if you’re in the first bucket, I highly encourage you to gain the skills necessary to migrate to the second bucket.

While designers often stay on top of visual design trends or the latest best practices from NNG, Jules Walter suggests an even wider aperture. Writing in Lenny’s Newsletter:

Another practice for developing creativity is to spend time learning about emerging trends in technology, society, and regulations. Changes in the industry create opportunities for launching new products that can address user needs in new ways. As a PM, you want to understand what’s possible in your domain in order to come up with creative solutions.


How to develop product sense

Jules Walter shares a ton of actionable and practical advice to develop your product sense, explains what product sense is, how to know if you’re getting better,

lennysnewsletter.com
Dark red-toned artwork of a person staring into a glowing phone, surrounded by swirling shadows.

Blood in the Feed: Social Media’s Deadly Design

The assassination of Charlie Kirk on September 10, 2025, marked a horrifying inflection point in the growing debate over how digital platforms amplify rage and destabilize politics. As someone who had already stepped back from social media after Trump’s re-election, watching these events unfold from a distance only confirmed my decision. My feeds had become pits of despair, grievances, and overall negativity that did my mental health no favors. While I understand the need to shine a light on the atrocities of Trump and his government, the constant barrage was too much. So I mostly opted out, save for the occasional promotion of my writing.

Kirk’s death feels like the inevitable conclusion of systems we’ve built—systems that reward outrage, amplify division, and transform human beings into content machines optimized for engagement at any cost.

The Mechanics of Disconnection

As it turns out, my behavior isn’t out of the ordinary. People quit social media for various reasons, often situational—seeking balance in an increasingly overwhelming digital landscape. As a participant explained in a research project about social media disconnection:

It was just a build-up of stress and also a huge urge to change things in life. Like, ‘It just can’t go on like this.’ And that made me change a number of things. So I started to do more sports and eat differently, have more social contacts and stop using online media. And instead of sitting behind my phone for two hours in the evening, I read a book and did some work, went to work out, I went to a birthday or a barbecue. I was much more engaged in other things. It just gave me energy. And then I thought, ‘This is good. That’s the way it’s supposed to be. I have to maintain this.’

Sometimes the realization is more visceral—that on these platforms, we are the product. As Jef van de Graaf provocatively puts it:

Every post we make, every friend we invited, every little notification dragging us back into the feed serves one purpose: to extract money from us—and give nothing back but dopamine addiction and mental illness.

While his language is deliberately inflammatory, the sentiment resonates with many who’ve watched their relationship with these platforms sour. As he cautions:

Remember: social media exists because we feed it our lives. We trade our privacy and sanity so VCs and founders can get rich and live like greedy fucking kings.

The Architecture of Rage

The internet was built to connect people and ideas. Even the early iterations of Facebook and Twitter were relatively harmless because the timelines were chronological. But then the makers—product managers, designers, and engineers—of social media platforms began to optimize for engagement and visit duration. Was the birth of the social media algorithm the original sin?

Kevin Roose and Casey Newton explored this question in their Hard Fork episode following Kirk’s assassination, discussing how platforms have evolved to optimize for what they call “borderline content”—material that comes right up to the line of breaking a platform’s policy without quite going over. As Newton observed about Kirk himself:

He excelled at making what some of the platform nerds that I write about would call borderline content. So basically, saying things that come right up to the line of breaking a platform’s policy without quite going over… It turns out that the most compelling thing you can do on social media is to almost break a policy.

Kirk mastered this technique—speculating that vaccines killed millions, calling the Civil Rights Act a mistake, flirting with anti-Semitic tropes while maintaining plausible deniability. He understood the algorithm’s hunger for controversy, and fed it relentlessly. And then, in a horrible irony, he was killed by someone who had likely been radicalized by the very same algorithmic forces he’d helped unleash.

As Roose reflected:

We as a culture are optimizing for rage now. You see it on the social platforms. You see it from politicians calling for revenge for the assassination of Charlie Kirk. You even see it in these individual cases of people getting extremely mad at the person who made a joke about Charlie Kirk that was edgy and tasteless, and going to report them to their employer and get them fired. It’s all this sort of spectacle of rage, this culture of destroying and owning and humiliating.

The Unraveling of Digital Society

Social media and smartphones have fundamentally altered how we communicate and socialize, often at the expense of face-to-face interactions. These technologies have created a market for attention that fuels fear, anger, and political conflict. The research on mental health impacts is sobering: studies found that the introduction of Facebook to college campuses led to measurable increases in depression, accounting for approximately 24 percent of the increased prevalence of severe depression among college students over two decades.

In the wake of Kirk’s assassination, what struck me most was how the platforms immediately transformed tragedy into content. Within hours, there were viral posts celebrating his death, counter-posts condemning those celebrations, organizations collecting databases of “offensive” comments, people losing their jobs, death threats flying in all directions. As Newton noted:

This kind of surveillance and doxxing is essentially a kind of video game that you can play on X. And people like to play video games. And because you’re playing with people’s real lives, it feels really edgy and cool and fun for those who are participating in this.

The human cost is staggering—teachers, firefighters, military members fired or suspended for comments about Kirk’s death. Many received death threats. Far-right activists called for violence and revenge, doxxing anyone they accused of insufficient mourning.

Blood in the Feed

The last five years have been marked by eruptions of political violence that cannot be separated from the online world that incubated them.

  • The attack on Paul Pelosi (2022). The man who broke into the Speaker of the House Nancy Pelosi’s San Francisco home and fractured her husband’s skull had been marinating in QAnon conspiracies and election denialism online. Extremism experts warned it was a textbook case of how stochastic terrorism—the idea that widespread demonization online can trigger unpredictable acts of violence by individuals—travels from platform rhetoric into a hammer-swinging hand.
  • The Trump assassination attempt (July 2024). A young man opened fire at a rally in Pennsylvania. His social media presence was filled with antisemitic, anti-immigrant content. Within hours, extremist forums were glorifying him as a martyr and calling for more violence.
  • The killing of Minnesota legislator Melissa Hortman and her husband (June 2025). Their murderer left behind a manifesto echoing the language of online white supremacist and anti-abortion communities. He wasn’t a “lone wolf.” He was drawing from the same toxic well of rhetoric that floods online forums. The language of his manifesto wasn’t unique—it was copied, recycled, and amplified in the ideological swamps anyone with a Wi-Fi connection can wander into.

These headline events sit atop a broader wave: the New Orleans truck-and-shooting rampage inspired by ISIS propaganda online (January 2025), the Cybertruck bombing outside Trump’s Las Vegas hotel tied to accelerationist forums—online spaces where extremists argue that violence should be used to hasten the collapse of society (January 2025), and countless smaller assaults on election workers, minority communities, and public officials.

The pattern is depressingly clear. Platforms radicalize, amplify, and normalize the language of violence. Then, someone acts.

The Death of Authenticity

As social media became commoditized—a place to influence and promote consumption—it became less personal and more like TV. The platforms are now being overrun by AI spam and engagement-driven content that drowns out real human connection. As James O’Sullivan notes:

Platforms have little incentive to stem the tide. Synthetic accounts are cheap, tireless and lucrative because they never demand wages or unionize… Engagement is now about raw user attention – time spent, impressions, scroll velocity – and the net effect is an online world in which you are constantly being addressed but never truly spoken to.

Research confirms what users plainly see: tens of thousands of machine-written posts now flood public groups, pushing scams and chasing engagement. Whatever remains of genuine human content is increasingly sidelined by algorithmic prioritization, receiving fewer interactions than the engineered content and AI slop optimized solely for clicks.

The result? Networks that once promised a single interface for the whole of online life are splintering. Users drift toward smaller, slower, more private spaces—group chats, Discord servers, federated microblogs, and email newsletters. A billion little gardens replacing the monolithic, rage-filled public squares that have led to a burst of political violence.

The Designer’s Reckoning

This brings us to design and our role in creating these systems. As designers, are we beginning to reckon with what we’ve wrought?

Jony Ive, reflecting on his own role in creating the smartphone, acknowledges this burden:

I think when you’re innovating, of course, there will be unintended consequences. You hope that the majority will be pleasant surprises. Certain products that I’ve been very involved with, I think there were some unintended consequences that were far from pleasant. My issue is that even though there was no intention, I think there still needs to be responsibility. And that weighs on me heavily.

His words carry new weight after Kirk’s assassination—a death enabled by platforms we designed, algorithms we optimized, engagement metrics we celebrated.

At the recent World Design Congress in London, architect Indy Johar didn’t mince words:

We need ideas and practices that change how we, as humans, relate to the world… Ignoring the climate crisis means you’re an active operator in the genocide of the future.

But we might ask: What about ignoring the crisis of human connection? What about the genocide of civil discourse? Climate activist Tori Tsui’s warning applies equally to our digital architecture: “The rest of us are at the mercy of what you decide to do with your imagination.”

Political violence is accelerating and people are dying because of what we did with our imagination. If responsibility weighs heavily, so too must the search for alternatives.

The Possibility of Bridges

There are glimmers of hope in potential solutions. Aviv Ovadya’s concept of “bridging-based algorithms” offers one path forward—systems that actively seek consensus across divides rather than exploiting them. As Casey Newton explains:

They show them to people across the political spectrum… and they only show the note if people who are more on the left and more on the right agree. They see a bridge between the two of you and they think, well, if Republicans and Democrats both think this is true, this is likelier to be true.
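
To make the mechanism concrete, here is a minimal sketch of the bridging idea in JavaScript. This is my own illustration, not the actual algorithm Ovadya proposes or any platform runs: a note is surfaced only when raters from both sides of a divide independently find it helpful.

// Sketch of bridging-based ranking; assumptions are labeled inline.
// ratings: array of { leaning: "left" | "right", helpful: boolean }
function shouldShowNote(ratings) {
  return ["left", "right"].every((side) => {
    const group = ratings.filter((r) => r.leaning === side);
    if (group.length === 0) return false; // require raters from both sides
    const helpfulShare = group.filter((r) => r.helpful).length / group.length;
    return helpfulShare >= 0.7; // consensus threshold, arbitrary for this sketch
  });
}

Real systems are subtler (Community Notes, for instance, infers rater viewpoints from past rating behavior rather than explicit labels), but the principle is the same: agreement across the divide is the signal.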

But technological solutions alone won’t save us. The participants in social media disconnection studies often report developing better relationships with technology only after taking breaks. One participant explained:

It’s more the overload that I look at it every time, but it doesn’t really satisfy me, that it no longer had any value at a certain point in time. But that you still do it. So I made a conscious choice – a while back – to stop using Facebook.

Designing in the Shadow of Violence

Rob Alderson, in his dispatch from the World Design Congress, puts together a few pieces. Johar suggests design’s role is “desire manufacturing”—not just creating products, but rewiring society to want and expect different versions of the future. As COLLINS co-founder Leland Maschmeyer argued, design is about…

What do we want to do? What do we want to become? How do we get there? … We need to make another reality as real as possible, inspired by new context and the potential that holds.

The challenge before us isn’t just technical—it’s fundamentally about values and vision. We need to move beyond the Post-it workshops and develop what Johar calls “new competencies” that shape the future.

As I write this, having stepped back from the daily assault of algorithmic rage, I find myself thinking about the Victorian innovators Ive mentioned—companies like Cadbury’s and Fry’s that didn’t just build factories but designed entire towns, understanding that their civic responsibility extended far beyond their products. They recognized that massive societal shifts, like moving people from the land they farmed to the cities where they worked in industrial manufacturing, require holistic thinking about how people live and work together.

We stand at a similar inflection point. The tools we’ve created have reshaped human connection in ways that led directly to Charlie Kirk’s assassination. A young man, radicalized online, killed a figure who had mastered the art of online radicalization. The snake devoured its tail on a college campus in Utah, and we all watched it happen in real-time, transforming even this tragedy into content.

The vast majority of Americans, as Newton reminds us, “do not want to participate in a violent cultural war with people who disagree with them.” Yet our platforms are engineered to convince us otherwise, to make civil war feel perpetually imminent, to transform every disagreement into an existential threat.

The Cost of Our Imagination

Perhaps the real design challenge lies not in creating more engaging feeds or stickier platforms, but in designing systems that honor our humanity, foster genuine connection, and help us build the bridges we so desperately need.

Because while these US incidents show how social media incubates lone attackers and small cells, they pale in comparison to Myanmar, where Facebook’s algorithms directly amplified hate speech and incitement, contributing to the deaths of thousands—estimates range from 6,700 to as high as 24,000—and the forced displacement of over 700,000 Rohingya Muslims. That catastrophe made clear: when platforms optimize only for engagement, the result isn’t connection but carnage.

This is our design failure. We built systems that reward extremism, amplify rage, and treat human suffering as engagement. The tools meant to bring us together have instead armed us against each other. And we all bear responsibility for that.

It’s time we imagined something better—before the systems we’ve created finish the job of tearing us apart.

Still from a video shown at Apple Keynote 2025. Split screen of AirPods Pro connection indicator on left, close-up of earbuds in charging case on right.

Notes About the September 2025 Apple Event

Today’s Apple keynote opened with a classic quote from Steve Jobs.

Steve Jobs quote at Apple Keynote 2025 – Black keynote slide with white text: “Design is not just what it looks like and feels like. Design is how it works.” – Steve Jobs.

Then a video played, focused on the fundamental geometric shapes that can be found in Apple’s products: circles in the HomePod, iPhone shutter button, iPhone camera, MagSafe charging ring, Digital Crown on Apple Watch; rounded squares in the charging block, Home scene button, Mac mini, keycaps, Finder icon, FaceID; to the lozenges found in the AirPods case, MagSafe port, Liquid Glass carousel control, and the Action button on Apple Watch Ultra.

Then Tim Cook repeated the notion in his opening remarks:

At Apple, design has always been fundamental to who we are and what we do. For us, design goes beyond just how something looks or feels. Design is also how it works. This philosophy guides everything we do, including the products we’re going to introduce today and the experiences they provide.

Apple announced a bunch of products today, including:

  • AirPods Pro 3 with better active noise canceling, live translation, and heart rate sensing (more below)
  • Apple Watch Series 11, thinner and with hypertension alerts and sleep score
  • iPhone 17 with a faster chip and better camera (as always)
  • iPhone Air at 5.6 mm thin! They packed all the main components into a new full-width camera “plateau” (I guess that’s the new word for camera bump)
  • iPhone 17 Pro / Pro Max with a faster chip and even better camera (as always), along with unibody construction and cool vapor cooling (like liquid cooling, but with vapor), and a beefy camera plateau

Highlights

Live Translation is Star Trek’s Universal Translator

In the Star Trek universe, humans regularly speak English with aliens and the audience hears those same aliens reply in English. Of course, it’s television and it was always explained away—that a “universal translator” is embedded in the comm badge all Starfleet crew members wear.

Apple Keynote 2025 iPhone Live Translation feature – Woman holds up an iPhone displaying translated text, demonstrating Apple Intelligence with AirPods Pro 3.

With AirPods Pro 3, this is becoming real! In one demo video, Apple shows a woman at a market. She’s shopping and hears a vendor speak to her in Spanish. Through her AirPods, she hears the live translation and can reply in English and have that translated back to Spanish on her iPhone. Then, in another scene, two guys are talking—English and Italian—and they’re both wearing the new AirPods and having a seamless conversation. Amazing.

Apple Keynote 2025 AirPods Pro 3 Live Translation demo at café – Man wearing AirPods Pro 3 sits outdoors at a café table, smiling while testing real-time language translation.

Heart Rate Monitoring in AirPods

Apple is extending its fitness tracking features to AirPods, specifically the new AirPods Pro 3. These come with a new sensor that pulses invisible infrared light 256 times per second to detect blood flow and calculate heart rate. I’m always astonished by how Apple keeps extending the capabilities of its devices to push health and fitness metrics, which—at least their thesis goes—helps with overall wellbeing. (See below.)

Full-Width Camera Bump

Or, the new camera plateau. I actually prefer the full width over just the bump. I feel like the plain camera bump on my iPhone 16 Pro makes the phone too wobbly when I put it on its back. I think a bump that spans the full width of the phone will make it more stable. This new design is on the new iPhone Air and iPhone 17 Pro.

To Air or Not to Air?

I’m on the iPhone Upgrade Program so I can get a new phone each year—and I have for the last few. I’m wondering if I want to get the Air this time. One thing I dislike about the iPhone Pros is their weight. The Pro is pretty heavy and I can feel it in my hand after prolonged use. At 165 grams, the Air is 17% lighter than the 16 Pro (199 grams). It might make a difference.

Overall Thoughts

Of course, in 2025, it’s a little striking that Apple didn’t mention much about AI. Apple framed AI not as a standalone product but as an invisible layer woven through AirPods, Watch, and iPhone—from Live Translation and Workout Buddy nudges to on-device models powering health insights and generative photo features. Instead of prompts and chatbots, Apple Intelligence showed up as contextual, ambient assistance designed to disappear into the flow of everyday use. And funnily enough, iOS 26 was mentioned in passing, as if Apple assumed everyone watching had seen the prior episode—er, keynote—in June.

It’s interesting that the keynote opened with that Steve Jobs quote about design. Maybe someone in Cupertino read my piece breaking down Liquid Glass where I argued:

People misinterpret this quote all the time to mean design is only how it works. That is not what Steve meant. He meant, design is both what it looks like and how it works.

(Actually, it was probably what Casey Newton wrote in Platformer about Liquid Glass.) 

If you step back and consider why Apple improves its hardware and software every year, it goes back to their implied mission: to make products that better human lives. This is exemplified by the “Dear Apple” spot they played as part of the segment on Apple Watch.


Apple’s foray into wearables—beyond ear- and headphones—with Apple Watch ten years ago was really an entry into health technology. Lives have been saved and people have gotten healthier because Apple technology enabled them. Dr. Sumbul Ahmad Desai, VP of Health, mentioned their new hypertension detection feature could notify over one million people with undiagnosed hypertension in its first year. Apple developed this feature using advanced machine learning, drawing on training data from multiple studies that involved over 100,000 participants. Then they clinically validated it in a separate study of over 2,000 participants. In other words, they’ve become a real force in shaping health tech.

And what also amazes me is that now AirPods Pro 3 will help with health and fitness tracking. (See above.)

There’s no doubt that Apple’s formal design is always top-notch. But it’s great to be reminded of their why and how these must-buy-by-Christmas devices are capable of solving real-world problems and bettering our lives. (And no, I don’t think having a lighter, thinner, faster, cooler phone falls into this category. We can have both moral purpose and commercial purpose.)

Brad Frost, of atomic design fame, wrote a history of themeable UIs as part of a deep dive into design tokens. He writes, “Design tokens may be the latest incarnation, but software creators have been creating themeable user interfaces for quite a long time!”

About Mario and Luigi from Super Mario Bros.:

It’s wild that two of the most iconic characters in the history of pop culture — red-clad Mario and green-clad Luigi — are themeable UI elements born from pragmatic ingenuity to overcome technological challenges. Freaking amazing.


The History of Themeable User Interfaces

A full-ish history of user interfaces that can be themed to meet the opportunities and constraints of the time

bradfrost.com

Josh Miller, CEO, and Hursh Agrawal, CTO, of The Browser Company:

Today, The Browser Company of New York is entering into an agreement to be acquired by Atlassian in an all-cash transaction. We will operate independently, with Dia as our focus. Our objective is to bring Dia to the masses.

Super interesting acquisition here. There is zero overlap as far as I can tell; Atlassian’s move is out of left field. Dia’s early users were college students. The Browser Company more recently opened it up to former Arc users. Is this bet for Atlassian—the company that makes tech-company-focused products like Jira and Confluence—about the future of work and collaboration? Is this their first move against Salesforce? 🤔


Your Tuesday in 2030

Or why The Browser Company is being acquired to bring Dia to the masses.

open.substack.com
Conceptual 3D illustration of stacked digital notebooks with a pen on top, overlaid on colorful computer code patterns.

Why We Still Need a HyperCard for the AI Era

I rewatched the 1982 film TRON for the umpteenth time the other night with my wife. I have always credited this movie as the spark that got me interested in computers. Mind you, I was nine years old when this film came out. I was so excited after watching the movie that I got my father to buy us a home computer—the mighty Atari 400 (note sarcasm). I remember an educational game that came on cassette called “States & Capitals” that taught me, well, the states and their capitals. It also introduced me to BASIC, and after watching TRON, I wanted to write programs!

Vintage advertisement for the Atari 400 home computer, featuring the system with its membrane keyboard and bold headline “Introducing Atari 400.”

The Atari 400’s membrane keyboard was easy to wipe down, but terrible for typing. It also reminded me of fast food restaurant registers of the time.

Back in the early days of computing—the 1960s and ’70s—there was no distinction between users and programmers. Computer users wrote programs to do stuff for them. Hence the close relationship between the two that’s depicted in TRON. The programs in the digital world resembled their creators because they were extensions of them. Tron, the security program that Bruce Boxleitner’s character Alan Bradley wrote, looks like its creator. Clu looked like Kevin Flynn, played by Jeff Bridges. Early in the film, a compound interest program that was captured by the MCP’s goons says to a cellmate, “if I don’t have a User, then who wrote me?”

Scene from the 1982 movie TRON showing programs in glowing blue suits standing in a digital arena.

The programs in TRON looked like their users. Unless the user was the program, which was the case with Kevin Flynn (Jeff Bridges), third from left.

I was listening to a recent interview with Ivan Zhao, CEO and cofounder of Notion, in which he said he and his cofounder were “inspired by the early computing pioneers who in the ’60s and ’70s thought that computing should be more LEGO-like rather than like hard plastic.” Meaning computing should be malleable and configurable. He goes on to say, “That generation of thinkers and pioneers thought about computing kind of like reading and writing.” As in accessible and fundamental so all users can be programmers too.

The 1980s ushered in the personal computer era with the Apple IIe, Commodore 64, TRS-80, (maybe even the Atari 400 and 800), and then the Macintosh, etc. Programs were beginning to be mass-produced and consumed by users, not programmed by them. To be sure, this move made computers much more approachable. But it also meant that users lost a bit of control. They had to wait for Microsoft to add a feature into Word that they wanted.

Of course, we’ve now come full circle. In 2025, with AI-enabled vibecoding, users are able to spin up little custom apps that do pretty much anything they want them to do. It’s easy, but not trivial. The only interface is the chatbox, so your control is only as good as your prompts and the model’s understanding. And things can go awry pretty quickly if you’re not careful.

What we’re missing is something accessible, but controllable. Something with enough power to allow users to build a lot, but not so much that it requires high technical proficiency to produce something good. In 1987, Apple released HyperCard and shipped it for free with every new Mac. HyperCard, as fans declared at the time, was “programming for the rest of us.”

HyperCard—Programming for the Rest of Us

Black-and-white screenshot of HyperCard’s welcome screen on a classic Macintosh, showing icons for Tour, Help, Practice, New Features, Art Bits, Addresses, Phone Dialer, Graph Maker, QuickTime Tools, and AppleScript utilities.

HyperCard’s welcome screen showed some useful stacks to help the user get started.

Bill Atkinson was the programmer responsible for MacPaint. After the Mac launched, and apparently on an acid trip, Atkinson conceived of HyperCard. As he wrote on the Apple history site Folklore:

Inspired by a mind-expanding LSD journey in 1985, I designed the HyperCard authoring system that enabled non-programmers to make their own interactive media. HyperCard used a metaphor of stacks of cards containing graphics, text, buttons, and links that could take you to another card. The HyperTalk scripting language implemented by Dan Winkler was a gentle introduction to event-based programming.

There were five main concepts in HyperCard: cards, stacks, objects, HyperTalk, and hyperlinks. 

  • Cards were screens or pages. Remember that the Mac’s nine-inch monochrome screen was just 512 pixels by 342 pixels.
  • Stacks were collections of cards, essentially apps.
  • Objects were the UI and layout elements that included buttons, fields, and backgrounds.
  • HyperTalk was the scripting language that read like plain English.
  • Hyperlinks were links from one interactive element like a button to another card or stack.

When I say that HyperTalk read like plain English, I mean it really did. AppleScript and JavaScript are descendants. Here’s a sample logic script:

if the text of field "Password" is "open sesame" then
  go to card "Secret"
else
  answer "Wrong password."
end if

Armed with this programming “erector set,” users built all sorts of banal or wonderful apps. From tracking vinyl records to issuing invoices, or transporting gamers to massive immersive worlds, HyperCard could do it all. The first version of the classic puzzle adventure game Myst was created with HyperCard; it comprised six stacks and 1,355 cards. From Wikipedia:

The original HyperCard Macintosh version of Myst had each Age as a unique HyperCard stack. Navigation was handled by the internal button system and HyperTalk scripts, with image and QuickTime movie display passed off to various plugins; essentially, Myst functions as a series of separate multimedia slides linked together by commands.

Screenshot from the game Myst, showing a 3D-rendered island scene with a ship in a fountain and classical stone columns.

The hit game Myst was built in HyperCard.

For a while, HyperCard was everywhere. Teachers made lesson plans. Hobbyists made games. Artists made interactive stories. In the Eighties and early Nineties, there was a vibrant shareware community: small independent developers created and shared simple programs in exchange for a postcard, a beer, or five dollars. Thousands of HyperCard stacks were distributed on aggregated floppies and CD-ROMs. Steve Sande, writing in Rocket Yard:

At one point, there was a thriving cottage industry of commercial stack authors, and I was one of them. Heizer Software ran what was called the “Stack Exchange”, a place for stack authors to sell their wares. Like Apple with the current app stores, Heizer took a cut of each sale to run the store, but authors could make a pretty good living from the sale of popular stacks. The company sent out printed catalogs with descriptions and screenshots of each stack; you’d order through snail mail, then receive floppies (CDs at a later date) with the stack(s) on them.

Black-and-white screenshot of Heizer Software’s “Stack Exchange” HyperCard catalog, advertising a marketplace for stacks.

Heizer Software’s “Stack Exchange,” a marketplace for HyperCard authors.

From Stacks to Shrink-Wrap

But even as shareware tiny programs and stacks thrived, the ground beneath this cottage industry was beginning to shift. To move computers from a niche to one in every household, the industry professionalized and commoditized software development, distribution, and sales. By the 1990s, the dominant model was packaged software merchandised on store shelves in slick shrink-wrapped boxes. The packaging was always oversized relative to the floppy or CD it contained, to maximize shelf presence.

Unlike the users/programmers of the ’60s and ’70s, you didn’t make your own word processor anymore—you bought Microsoft Word. You didn’t build your own paint and retouching program—you purchased Adobe Photoshop. These applications were powerful, polished, and designed for thousands and eventually millions of users. But that meant if you wanted a new feature, you had to wait for the next upgrade cycle—typically a couple of years. If you had an idea, you were constrained by what the developers at Microsoft or Adobe decided was on the roadmap.

The ethos of tinkering gave way to the economics of scale. Software became something you consumed rather than created.

From Shrink-Wrap to SaaS

The 2000s took that shift even further. Instead of floppy disks or CD-ROMs, software moved into the cloud. Gmail replaced the personal mail client. Google Docs replaced the need for a copy of Word on every hard drive. Salesforce, Slack, and Figma turned business software into subscription services you didn’t own, but rented month-to-month.

SaaS has been a massive leap for collaboration and accessibility. Suddenly your documents, projects, and conversations lived everywhere. No more worrying about hard drive crashes or lost phones! But it pulled users even farther away from HyperCard’s spirit. The stack you made was yours; the SaaS you use lives on someone else’s servers. You can customize workflows, but you don’t own the software.

Why Modern Tools Fall Short

For what started out as a note-taking app, Notion has come a long way. With its kit of parts—pages, databases, tags, etc.—it’s highly configurable for tracking information. But you can’t make games with it. Nor can you really tell interactive stories (sure, you can link pages together). You also can’t distribute what you’ve created and share it with the rest of the world. (Yes, you can create and sell Notion templates.)

No productivity software programs are malleable in the HyperCard sense. 

[IMAGE: Director]

Of course, there are specialized tools for creativity. Unreal Engine and Unity are great for making games. Director and Flash continued the tradition started by HyperCard—at least in the interactive media space—before they were supplanted by more complex HTML5, CSS, and JavaScript. Objectively, these authoring environments are more complex than HyperCard ever was.

The Web’s HyperCard DNA

In a fun remembrance, Constantine Frantzeskos writes:

HyperCard’s core idea was linking cards and information graphically. This was true hypertext before HTML. It’s no surprise that the first web pioneers drew direct inspiration from HyperCard – in fact, HyperCard influenced the creation of HTTP and the Web itself​. The idea of clicking a link to jump to another document? HyperCard had that in 1987 (albeit linking cards, not networked documents). The pointing finger cursor you see when hovering over a web link today? That was borrowed from HyperCard’s navigation cursor​.

Ted Nelson coined the terms “hypertext” and “hyperlink” in the mid-1960s, envisioning a world where digital documents could be linked together in nonlinear “trails”—making information interwoven and easily navigable. Bill Atkinson’s HyperCard was the first mass-market program that popularized this idea, even influencing Tim Berners-Lee, the father of the World Wide Web. Berners-Lee’s invention was about linking documents together on a server and linking to other documents on other servers. A web of documents.

Early ViolaWWW hypermedia browser from 1993, displaying a window with navigation buttons, URL bar, and hypertext description.

Early web browser from 1993, ViolaWWW, directly inspired by the concepts in HyperCard.

Pei-Yuan Wei, developer of one of the first web browsers called ViolaWWW, also drew direct inspiration from HyperCard. Matthew Lasar writing for Ars Technica:

“HyperCard was very compelling back then, you know graphically, this hyperlink thing,” Wei later recalled. “I got a HyperCard manual and looked at it and just basically took the concepts and implemented them in X-windows,” which is a visual component of UNIX. The resulting browser, Viola, included HyperCard-like components: bookmarks, a history feature, tables, graphics. And, like HyperCard, it could run programs.

And of course, with the built-in source code viewer, browsers brought on a new generation of tinkerers who’d look at HTML and make stuff by copying, tweaking, and experimenting.

The Missing Ingredient: Personal Software

Today, we have low-code and no-code tools like Bubble for making web apps, Framer for building websites, and Zapier for automations. The tools are still aimed at professionals, though. With the possible exception of Zapier and IFTTT, they’ve expanded the number of people who can make software (including websites), but they’re not general purpose. These are all adjacent to what HyperCard was.

(Re)enter personal software.

In an essay titled “Personal software,” Lee Robinson wrote, “You wouldn’t search ‘best chrome extensions for note taking’. You would work with AI. In five minutes, you’d have something that works exactly how you want.”

Exploring the idea of “malleable software,” researchers at Ink & Switch wrote:

How can users tweak the existing tools they’ve installed, rather than just making new siloed applications? How can AI-generated tools compose with one another to build up larger workflows over shared data? And how can we let users take more direct, precise control over tweaking their software, without needing to resort to AI coding for even the tiniest change? None of these questions are addressed by products that generate a cloud-hosted application from a prompt.

Of course, AI prompt-to-code tools have been emerging this year, allowing anyone who can type to build web applications. However, if you study these tools more closely—Replit, Lovable, Base44, etc.—you’ll find that the audience is still technical people. Developers, product managers, and designers can understand what’s going on. But not everyday people.

These tools are still missing the ingredients HyperCard had—the ones that kept it in the general zeitgeist for a while and enabled users to be programmers again.

They are:

  • Direct manipulation
  • Technical abstraction
  • Local apps

What Today’s Tools Still Miss

Direct Manipulation

As I concluded in my exhaustive AI prompt-to-code tools roundup from April, “We need to be able to directly manipulate components by clicking and modifying shapes on the canvas or changing values in an inspector.” The roundtrip of prompting the model, waiting for it to think and generate code, and then rebuilding the app is much too long. If you don’t know how to code, every change takes minutes, so building something becomes tedious, not fun.

Tools need to be canvas-first, not chatbox-first. Imagine a kit of UI elements on the left that you can drag onto the canvas and then configure and style—not unlike WordPress page builders.

AI is there to do the work for you if you want, but you don’t need to use it.

Hand-drawn sketch of a modern HyperCard-like interface, with a canvas in the center, object palette on the left, and chat panel on the right.

My sketch of the layout of what a modern HyperCard successor could look like. A directly manipulatable canvas is in the center, object palette on the left, and AI chat panel on the right.

Technical Abstraction

For gen pop, I believe that these tools should hide away all the JavaScript, TypeScript, etc. The thing that the user is building should just work.

Additionally, there’s an argument to be made to bring back HyperTalk or something similar. Here is the same password logic I showed earlier, but in modern-day JavaScript:

const password = document.getElementById("Password").value;

if (password === "open sesame") {
  window.location.href = "secret.html";
} else {
  alert("Wrong password.");
} 

No one is going to understand that, much less write something like it.

One could argue that the user doesn’t need to understand that code since the AI will write it. Sure, but code is also documentation. If a user is working on an immersive puzzle game, they need to know the algorithm for the solution. 

As a side note, I think flow charts or node-based workflows are great. Unreal Engine’s Blueprints visual scripting is fantastic. Again, AI should be there to assist.

Unreal Engine Blueprints visual scripting interface, with node blocks connected by wires representing game logic.

Unreal Engine has a visual scripting interface called Blueprints, with node blocks connected by wires representing game logic.

Local Apps

HyperCard’s file format was “stacks,” and stacks could be compiled into applications that could be distributed without HyperCard. Today’s cloud-based AI coding tools can all publish a project to a unique URL for sharing. That’s great for prototyping and for personal use, but if you wanted to distribute it as shareware or donation-ware, you’d have to map it to a custom domain name, and buying one from a registrar and dealing with DNS records is not straightforward for most people.

What if these web apps can be turned into a single exchangeable file format like “.stack” or some such? Furthermore, what if they can be wrapped into executable apps via Electron?
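
To gauge how little glue that would take, here is a minimal Electron main process. It’s a sketch under stated assumptions: “stack.html” stands in for a hypothetical single-file export of the web app, and “.stack” is an invented format, not a real one.

// main.js: a minimal Electron shell around an exported web app (sketch)
const { app, BrowserWindow } = require("electron");

app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 1024, height: 768 });
  win.loadFile("stack.html"); // hypothetical single-file "stack" export
});

// Quit when the last window closes.
app.on("window-all-closed", () => app.quit());

A packager like electron-builder could then produce a double-clickable app, conceptually the same move as HyperCard compiling a stack into a standalone program.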

Rip, Mix, Burn

Lovable, v0, and others already have sharing and remixing built in. This ethos is great and builds on the philosophies of the hippie computer scientists. In addition to fostering a remix culture, I imagine a centralized store for these apps. Of course, those that are published as runtime apps can go through the official Apple and Google stores if they wish. Finally, nothing stops third-party stores, similar to the collections of stacks that used to be distributed on CD-ROMs.

AI as Collaborator, Not Interface

As mentioned, AI should not be the main UI for this. Instead, it’s a collaborator. It’s there if you want it. I imagine that it can help with scaffolding a project just by describing what you want to make. And as it’s shaping your app, it’s also explaining what it’s doing and why so that the user is learning and slowly becoming a programmer too.

Democratizing Programming

When my daughter was in middle school, she used a site called Quizlet to make flash cards to help her study for history tests. There were often user-generated sets of cards for certain subjects, but there were never sets specifically for her class, her teacher, that test. With this HyperCard of the future, she would be able to build something custom in minutes.

Likewise, a small business owner who runs an Etsy shop selling T-shirts can spin up something a little more complicated to analyze sales and compare against overall trends in the marketplace.

And that same Etsy shop owner could sell the little app they made to others wanting the same tool for their stores.

The Future Is Close

Scene from TRON showing a program with raised arms, looking upward at a floating disc in a beam of light.

Tron talks to his user, Alan Bradley, via a communication beam.

In an interview with Garry Tan of Y Combinator in June, Michael Truell, the CEO of Anysphere, which is the company behind Cursor, said his company’s mission is to “replace coding with something that’s much better.” He acknowledged that coding today is really complicated:

Coding requires editing millions of lines of esoteric formal programming languages. It requires doing lots and lots of labor to actually make things show up on the screen that are kind of simple to describe.

Truell believes that in five to ten years, making software will boil down to “defining how you want the software to work and how you want the software to look.”

In my opinion, his timeline is a bit conservative, but maybe he means for professionals. I wonder if something simpler will come along sooner that will capture the imagination of the public, like ChatGPT has. Something that will encourage playing and tinkering like HyperCard did.

There’s a third TRON film coming out soon—TRON: Ares. In a panel discussion in the 5,000-seat Hall H at San Diego Comic-Con earlier this summer, Steven Lisberger, the creator of the franchise, offered this warning about AI: “Let’s kick the technology around artistically before it kicks us around.” While he said it as a warning, I think it’s an opportunity as well.

AI opens up computer “programming” to a much larger swath of people—hell, everyone. As an industry, we should encourage tinkering by building such capabilities into our products. Not UIs on the fly, but mods as necessary. We should build platforms that increase the pool of users from technical people to everyday users like students, high school teachers, and grandmothers. We should imagine a world where software is as personalizable as a notebook—something you can write in, rearrange, and make your own. And maybe users can be programmers once again.

Simon Sherwood, writing in The Register:

Amazon Web Services CEO Matt Garman has suggested firing junior workers because AI can do their jobs is “the dumbest thing I’ve ever heard.”

Garman made that remark in conversation with AI investor Matthew Berman, during which he talked up AWS’s Kiro AI-assisted coding tool and said he’s encountered business leaders who think AI tools “can replace all of our junior people in our company.”

That notion led to the “dumbest thing I’ve ever heard” quote, followed by a justification that junior staff are “probably the least expensive employees you have” and also the most engaged with AI tools.

“How’s that going to work when ten years in the future you have no one that has learned anything,” he asked. “My view is you absolutely want to keep hiring kids out of college and teaching them the right ways to go build software and decompose problems and think about it, just as much as you ever have.”

Yup. I agree.


AWS CEO says AI replacing junior staff is 'dumbest idea'

They're cheap and grew up with AI … so you're firing them why?

theregister.com

As a follow-up to yesterday’s item on how Google’s AI overviews are curtailing traffic to websites by as much as 25%, here is a link to Nielsen Norman Group’s just-published study showing that generative AI is reshaping search.

Kate Moran, Maria Rosala and Josh Brown:

While AI offers compelling shortcuts around tedious research tasks, it isn’t close to completely replacing traditional search. But, even when people are using traditional search, the AI-generated overview that now tops almost all search-results pages steals a significant amount of attention and often shortcuts the need to visit the actual pages.

They write that users have developed a way to search over the years, skipping sponsored results and heading straight for the organic links. Users also haven’t completely broken free of traditional Google Search, now adding chatbots to the mix:

While generative AI does offer enough value to change user behaviors, it has not replaced traditional search entirely. Traditional search and AI chats were often used in tandem to explore the same topic and were sometimes used to fact-check each other.

All our participants engaged in traditional search (using keywords, evaluating results pages, visiting content pages, etc.) multiple times in the study. Nobody relied entirely on genAI’s responses (in chat or in an AI overview) for all their information-seeking needs.

In many ways, I think this is smart. Unless “web search” is happening, I tend to double-check ChatGPT and Claude, especially for anything historical and mission-critical. I also like Perplexity for that reason—it shows me its receipts by giving me sources.


How AI Is Changing Search Behaviors

Our study shows that generative AI is reshaping search, but long-standing habits persist. Many users still default to Google, giving Gemini a fighting chance.

nngroup.com

Jessica Davies reports that new publisher data suggests that some sites are getting 25% less traffic from Google than the previous year.

Writing in Digiday:

Organic search referral traffic from Google is declining broadly, with the majority of DCN member sites — spanning both news and entertainment — experiencing traffic losses from Google search between 1% and 25%. Twelve of the respondent companies were news brands, and seven were non-news.

Jason Kint, CEO of DCN, says that this is a “direct consequence of Google AI Overviews.”

I wrote previously about the changing economics of the web here, here, and here.

And related: Eric Mersch writes in a LinkedIn post that Monday.com’s stock fell 23% after co-CEO Roy Mann said, “We are seeing some softness in the market due to Google algorithm,” during their Q2 earnings call, and the analysts just kept hammering him and the CFO about how the algo changes might affect customer acquisition.

Analysts continued to press the issue, which caught company management completely off guard. Matthew Bullock from Bank of America Merrill Lynch asked frankly, “And then help us understand, why call this out now? How did the influence of Google SEO disruption change this quarter versus 1Q, for example?” The CEO could only respond, “So look, I think like we said, we optimize in real-time. We just budget daily,” implying that they were not aware of the problem until they saw Q2 results.

This is one of the first public signs that the shift from Google to AI-powered search is having an impact.


Google AI Overviews linked to 25% drop in publisher referral traffic, new data shows

The majority of Digital Content Next publisher members are seeing traffic losses from Google search between 1% and 25% due to AI Overviews.

digiday.com

I enjoyed this interview with Notion CEO Ivan Zhao over at the Decoder podcast, with substitute host Casey Newton. What I didn’t quite get when I first used Notion was the “LEGO” aspect of it. Their vision is to build business software that is highly malleable and configurable to do all sorts of things. Here’s Zhao:

Well, because it didn’t quite exist with software. If you think about the last 15 years of [software-as-a-service], it’s largely people building vertical point solutions. For each buyer, for each point, that solution sort of makes sense. The way we describe it is that it’s like a hard plastic solution for your problem, but once you have 20 different hard plastic solutions, they sort of don’t fit well together. You cannot tinker with them. As an end user, you have to jump between half a dozen of them each day.

That’s not quite right, and we’re also inspired by the early computing pioneers who in the ‘60s and ‘70s thought that computing should be more LEGO-like rather than like hard plastic. That’s what got me started working on Notion a long time ago, when I was reading a computer science paper back in college.

From a user experience POV, Notion is both simple and exceedingly complicated. Taking notes is easy. Building the system for a workflow, not so much.

In the second half, Newton (gently) presses Zhao on the impact of AI on the workforce and how productivity software like Notion could replace headcount.

Newton: Do you think that AI and Notion will get to a point where executives will hire fewer people, because Notion will do it for them? Or are you more focused on just helping people do their existing jobs?

Zhao: We’re actually putting out a campaign about this, in the coming weeks or months. We want to push out a more amplifying, positive message about what Notion can do for you. So, imagine the billboard we’re putting out. It’s you in the center. Then, with a tool like Notion or other AI tools, you can have AI teammates. Imagine that you and I start a company. We’re two co-founders, we sign up for Notion, and all of a sudden, we’re supplemented by other AI teammates, some taking notes for us, some triaging, some doing research while we’re sleeping.

Zhao dodges the “hire fewer people” part of the question and instead answers with “amplifying” people, or making them more productive.

preview-1755062355751.jpg

Notion CEO Ivan Zhao wants you to demand better from your tools

Notion’s Ivan Zhao on AI agents, productivity, and how software will change in the future.

theverge.com icontheverge.com

Yesterday, OpenAI launched GPT-5, their latest and greatest model, which replaces the confusing assortment of GPT-4o, o3, o4-mini, etc. with just two options: GPT-5 and GPT-5 Pro. Reasoning is built in, and the new model is smart enough to know when to think harder and when a quick answer suffices.
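From the developer side, picking a model largely collapses into picking how hard it should think. Here’s a minimal sketch of what that looks like, assuming the OpenAI Python SDK’s Responses API and its reasoning-effort parameter behave as the public docs describe (this is my reading of the docs, not something from Willison’s post):

```python
# A minimal sketch: dialing GPT-5's reasoning effort up or down via the
# OpenAI Python SDK's Responses API. The effort values and output_text
# accessor are assumptions from the public docs; verify before relying on it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Quick lookup: tell the model to spend little or no time reasoning.
quick = client.responses.create(
    model="gpt-5",
    reasoning={"effort": "minimal"},
    input="In one sentence, what is an IDE?",
)
print(quick.output_text)

# Harder problem: allow a longer hidden reasoning pass before answering.
deep = client.responses.create(
    model="gpt-5",
    reasoning={"effort": "high"},
    input="Outline the trade-offs between HSL and HSV color spaces.",
)
print(deep.output_text)
```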

Simon Willison deep dives into GPT-5, exploring its mix of speed and deep reasoning, massive context limits, and competitive pricing. He sees it as a steady, reliable default for everyday work rather than a radical leap forward:

I’ve mainly explored full GPT-5. My verdict: it’s just good at stuff. It doesn’t feel like a dramatic leap ahead from other LLMs but it exudes competence—it rarely messes up, and frequently impresses me. I’ve found it to be a very sensible default for everything that I want to do. At no point have I found myself wanting to re-run a prompt against a different model to try and get a better result.

It’s a long technical read but interesting nonetheless.

preview-1754630277862.jpg

GPT-5: Key characteristics, pricing and model card

I’ve had preview access to the new GPT-5 model family for the past two weeks (see related video) and have been using GPT-5 as my daily-driver. It’s my new favorite …

simonwillison.net iconsimonwillison.net

Jay Hoffman, writing on his excellent website The History of the Web, reflects on Kevin Kelly’s 2005 Wired piece that celebrated the explosive growth of blogging—50 million blogs, one created every two seconds—and predicted a future powered by open participation and user-created content. Kelly was right about the power of audiences becoming creators, but he missed the crucial detail: 2005 would mark the peak of that open web participation before everyone moved into centralized platforms.

There are still a lot of blogs, 600 million by some accounts. But they have been supplanted over the years by social media networks. Commerce on the web has consolidated among fewer and fewer sites. Open source continues to be a major backbone to web technologies, but it is underfunded and powered almost entirely by the generosity of its contributors. Open API’s barely exist. Forums and comment sections are finding it harder and harder to beat back the spam. Users still participate in the web each and every day, but it increasingly feels like they do so in spite of the largest web platforms and sites, not because of them.

My blog—this website—is a direct response to the consolidation. This site and its content are owned and operated by me and not stuck behind a login or paywall to be monetized by Meta, Medium, Substack, or Elon Musk. That is the open web.

Hoffman goes on to say, “The web was created for participation, by its nature and by its design. It can’t be bottled up long.” He concludes with:

Independent journalists who create unique and authentic connections with their readers are now possible. Open social protocols that experts truly struggle to understand, is being powered by a community that talks to each other.

The web is just people. Lots of people, connected across global networks. In 2005, it was the audience that made the web. In 2025, it will be the audience again.

preview-1754534872678.jpg

We Are Still the Web

Twenty years ago, Kevin Kelly wrote an absolutely seminal piece for Wired. This week is a great opportunity to look back at it.

thehistoryoftheweb.com iconthehistoryoftheweb.com
Portraits of five recent design graduates. Top row, left to right: Ashton Landis, wearing a black sleeveless top with long blonde hair against a dark background; Erika Kim, outdoors in front of a mountain at sunset, smiling in a fleece-collared jacket; and Emma Haines, smiling and looking over her shoulder in a light blazer, outdoors. Bottom row, left to right: Leah Ray, in a black-and-white portrait wearing a black turtleneck, looking ahead; and Benedict Allen, smiling in a black jacket with layered necklaces against a light background

Meet the 5 Recent Design Grads and 5 Design Educators

For my series on the Design Talent Crisis (see Part I, Part II, and Part III) I interviewed five recent graduates from California College of the Arts (CCA) and San Diego City College. I’m an alum of CCA and I used to teach at SDCC. There’s a mix of folks from both the graphic design and interaction design disciplines.

Meet the Grads

If these enthusiastic and immensely talented designers are available and you’re in a position to hire, please reach out to them!

Benedict Allen

Benedict Allen, smiling in a black jacket with layered necklaces against a light background

Benedict Allen is a Los Angeles-based visual designer who specializes in creating compelling visual identities at the intersection of design, culture, and storytelling. With a strong background in apparel graphics and branding, Benedict brings experience from his freelance work for The Hunt and Company—designing for a major automotive YouTuber’s clothing line—and an internship at Pureboost Energy Drink Mix. He is skilled in a range of creative tools including Photoshop, Illustrator, Figma, and AI image generation. Benedict’s approach is rooted in history and narrative, resulting in clever and resonant design solutions. He holds an Associate of Arts in Graphic Design from San Diego City College and has contributed to the design community through volunteer work with AIGA San Diego Tijuana.

Emma Haines

Emma Haines, smiling and looking over her shoulder in a light blazer, outdoors

Emma Haines is a UX and interaction designer with a background in computer science, currently completing her MDes in Interaction Design at California College of the Arts. She brings technical expertise and a passion for human-centered design to her work, with hands-on experience in integrating AI into both the design process and user-facing projects. Emma has held roles at Mphasis, Blink UX, and Colorado State University, and is now seeking full-time opportunities where she can apply her skills in UX, UI, or product design, particularly in collaborative, fast-paced environments.

Erika Kim

Erika Kim, outdoors in front of a mountain at sunset, smiling in a fleece-collared jacket

Erika Kim is a passionate UI/UX and product designer based in Poway, California, with a strong foundation in both visual communication and thoughtful problem-solving. A recent graduate of San Diego City College’s Interaction & Graphic Design program, Erika has gained hands-on experience through internships at TritonNav, Four Fin Creative, and My Rental Spot, as well as a year at Apple in operations and customer service roles. Her work has earned her recognition, including a Gold Winner award at The One Club Student Awards for her project “Gatcha Eats.” Erika’s approach to design emphasizes clarity, collaboration, and the power of well-crafted wayfinding—a passion inspired by her fascination with city and airport signage. She is fluent in English and Korean, and is currently open to new opportunities in user experience and product design.

Ashton Landis

Ashton Landis, wearing a black sleeveless top with long blonde hair against a dark background

Ashton Landis is a San Francisco-based graphic designer with a passion for branding, typography, and visual storytelling. A recent graduate of California College of the Arts with a BFA in Graphic Design and a minor in ecological practices, Ashton has developed expertise across branding, UI/UX, design strategy, environmental graphics, and more. She brings a people-centered approach to her work, drawing on her background in photography to create impactful and engaging design solutions. Ashton’s experience includes collaborating with Bay Area non-profits to build participatory identity systems and improve community engagement. She is now seeking new opportunities to grow and help brands make a meaningful impact.

Leah Ray

Leah Ray, in a black-and-white portrait wearing a black turtleneck, looking ahead

Leah (Xiayi Lei) Ray is a Beijing-based visual designer currently working at Kuaishou Technology, with a strong background in impactful graphic design that blends logic and creativity. She holds an MFA in Design and Visual Communications from California College of the Arts, where she also contributed as a teaching assistant and poster designer. Leah’s experience spans freelance work in branding, identity, and book cover design, as well as roles in UI/UX and visual development at various companies. She is fluent in English and Mandarin, passionate about education, arts, and culture, and is recognized for her thoughtful, novel approach to design.

Meet the Educators

Sean Bacon

Sean Bacon, smiling in a light button-down against a blue-gray background

Sean Bacon is a professor, passionate designer, and obsessive typophile who teaches a wide range of classes at San Diego City College, where he also helps direct and manage the graphic design program and its administrative responsibilities. He always strives to bring excellence to his students’ work, and his wealth of experience and insight helps produce many of the award-winning portfolios from City. He has worked for The Daily Aztec, Jonathan Segal Architecture, Parallax Visual Communication, and Silent Partner. He attended San Diego City College and San Diego State, and completed his master’s at Savannah College of Art and Design.

Eric Heiman

Eric Heiman, in profile wearing a flat cap and glasses, black and white photo

Eric Heiman is principal and co-founder of the award-winning, oft-exhibited design studio Volume Inc. He also teaches at California College of the Arts (CCA) where he currently manages TBD*, a student-staffed design studio creating work to help local Bay Area nonprofits and civic institutions. Eric also writes about design every so often, has curated one film festival, occasionally podcasts about classic literature, and was recently made an AIGA Fellow for his contribution to raising the standards of excellence in practice and conduct within the Bay Area design community. 

Elena Pacenti

Portrait of Elena Pacenti, smiling with long blonde hair, wearing a black top, in soft natural light.

Elena Pacenti is a seasoned design expert with over thirty years of experience in design education, research, and international projects. Currently the Director of the MDes Interaction Design program at California College of the Arts, she has previously held leadership roles at NewSchool of Architecture & Design and Domus Academy, focusing on curriculum development, faculty management, and strategic planning. Elena holds a PhD in Industrial Design and a Master’s in Architecture from Politecnico di Milano, and is recognized for her expertise in service design, strategic design, and user experience. She is passionate about leading innovative, complex projects where design plays a central role.

Bradford Prairie

Bradford Prairie, smiling in a jacket and button-down against a soft purple background

Bradford Prairie has been teaching at San Diego City College for nine years, starting as an adjunct instructor while simultaneously working as a professional designer and creative director at Ignyte, a leading branding agency. What made his transition unique was Ignyte’s support for his educational aspirations—they understood his desire to prioritize teaching and eventually move into it full-time. This dual background in industry and academia allows him to bring real-world expertise into the classroom while maintaining his creative practice.

Josh Silverman

Josh Silverman, smiling in a striped shirt against a dark background

For three decades, Josh Silverman has built bridges between entrepreneurship, design education, and designers—always focused on helping people find purpose and opportunity. As founder of PeopleWork Partners, he brings a humane design lens to recruiting and leadership coaching, placing emerging leaders at companies like Target, Netflix, and OpenAI, and advising design teams on critique, culture, and operations. He has chaired the MDes program at California College of the Arts, taught and spoken worldwide, and led AIGA chapters. Earlier, he founded Schwadesign, a lean, holacratic studio recognized by The Wall Street Journal and others. His clients span startups, global enterprises, top universities, cities, and non-profits. Josh is endlessly curious about how teams make decisions and what makes them thrive—and is always up for a long bike ride.

Luke Wroblewski, writing in his blog:

Across several of our companies, software development teams are now “out ahead” of design. To be more specific, collaborating with AI agents (like Augment Code) allows software developers to move from concept to working code 10x faster. This means new features become code at a fast and furious pace.

When software is coded this way, however, it (currently at least) lacks UX refinement and thoughtful integration into the structure and purpose of a product. This is the work that designers used to do upfront but now need to “clean up” afterward. It’s like the development process got flipped around. Designers used to draw up features with mockups and prototypes, then engineers would have to clean them up to ship them. Now engineers can code features so fast that designers are the ones going back and cleaning them up.

This is what I’ve been secretly afraid of. That we would go back to the times when designers were called in to do cleanup. Wroblewski says:

Instead of waiting for months, you can start playing with working features and ideas within hours. This allows everyone, whether designer or engineer, an opportunity to learn what works and what doesn’t. At its core rapid iteration improves software and the build, use/test, learn, repeat loop just flipped, it didn’t go away.

Yeah, or the feature will get shipped this way and be stuck this way because startups move fast and move on.

My take is that as designers, we need to meet the moment and figure out how to build design systems and best practices into the agentic workflows our developer counterparts are using.
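One concrete, if modest, version of that is to put the design system somewhere the agent’s output gets checked automatically. Here’s a minimal sketch of the idea, a CI-style script that fails when styles hardcode colors outside the token palette; the token values and file layout are hypothetical, not any particular team’s setup:

```python
# A minimal sketch: flag hex colors in CSS that aren't design-system tokens,
# so agent-generated code gets caught by CI. Palette and paths are made up.
import re
import sys
from pathlib import Path

ALLOWED_TOKENS = {"#1A1A2E", "#FFCC33", "#F5F5F5"}  # hypothetical palette

def check_file(path: Path) -> list[str]:
    """Return violations: hex colors in this file not in the token palette."""
    violations = []
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        for color in re.findall(r"#[0-9A-Fa-f]{6}\b", line):
            if color.upper() not in ALLOWED_TOKENS:
                violations.append(f"{path}:{lineno}: {color} is not a design token")
    return violations

if __name__ == "__main__":
    problems = [v for f in Path("src").rglob("*.css") for v in check_file(f)]
    print("\n".join(problems))
    sys.exit(1 if problems else 0)
```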

preview-1753725448535.png

AI Has Flipped Software Development

For years, it's been faster to create mockups and prototypes of software than to ship it to production. As a result, software design teams could stay "ahead" of...

lukew.com iconlukew.com

In many ways, this excellent article by Kaustubh Saini for Final Round AI’s blog is a cousin to my essay on the design talent crisis. But it’s about what happens when people “become” developers and only know vibe coding.

The appeal is obvious, especially for newcomers facing a brutal job market. Why spend years learning complex programming languages when you can just describe what you want in plain English? The promise sounds amazing: no technical knowledge required, just explain your vision and watch the AI build it.

In other words, these folks don’t understand the code and, well, bad things can happen.

The most documented failure involves an indie developer who built a SaaS product entirely through vibe coding. Initially celebrating on social media that his “saas was built with Cursor, zero hand written code,” the story quickly turned dark.

Within weeks, disaster struck. The developer reported that “random things are happening, maxed out usage on api keys, people bypassing the subscription, creating random shit on db.” Being non-technical, he couldn’t debug the security breaches or understand what was going wrong. The application was eventually shut down permanently after he admitted “Cursor keeps breaking other parts of the code.”

This failure illustrates the core problem with vibe coding: it produces developers who can generate code but can’t understand, debug, or maintain it. When AI-generated code breaks, these developers are helpless.

I don’t foresee something this disastrous with design. I mean, a newbie designer wielding an AI-enabled Canva or Figma can’t tank a business alone because the client will have eyes on it and won’t let through something that doesn’t work. It could be a design atrocity, but it’ll likely be fine.

This *can* happen to a designer using vibe coding tools, however. Full disclosure: I’m one of them. This site is partially vibe-coded. My Severance fan project is entirely vibe-coded.

But back to the idea of a talent crisis. In the developer world, it’s already happening:

The fundamental problem is that vibe coding creates what experts call “pseudo-developers.” These are people who can generate code but can’t understand, debug, or maintain it. When AI-generated code breaks, these developers are helpless.

In other words, they don’t have the skills necessary to be developers because they can’t do the basics. They can’t debug, don’t understand architecture, have no code review skills, and basically have no fundamental knowledge of what it means to be a programmer. “They miss the foundation that allows developers to adapt to new technologies, understand trade-offs, and make architectural decisions.”

Again, even if our junior designers have the requisite fundamental design skills, not having spent time developing their craft and strategic skills through experience will be detrimental to them and to any org that hires them.

preview-1753377392986.jpg

How AI Vibe Coding Is Destroying Junior Developers' Careers

New research shows developers think AI makes them 20% faster but are actually 19% slower. Vibe coding is creating unemployable pseudo-developers who can't debug or maintain code.

finalroundai.com iconfinalroundai.com

Sonos announced yesterday that interim CEO Tom Conrad was made permanent. From their press release:

Sonos has achieved notable progress under Mr. Conrad’s leadership as Interim CEO. This includes setting a new standard for the quality of Sonos’ software and product experience, clearing the path for a robust new product pipeline, and launching innovative new software enhancements to flagship products Sonos Ace and Arc Ultra.

Conrad surely navigated this minefield well after the disastrous app redesign that wiped almost $500 million from the company’s market value and cost CEO Patrick Spence his job. My sincere hope is that Conrad keeps rebuilding Sonos’s reputation by continuing to improve its products.

Sonos Appoints Tom Conrad as Chief Executive Officer

Sonos Website

sonos.com iconsonos.com

In case you missed it, there’s been a major shift in the AI tool landscape.

On Friday, OpenAI’s $3 billion offer to acquire AI coding tool Windsurf expired. Windsurf is the Pepsi to Cursor’s Coke. They’re both IDEs, the desktop applications that software developers use to write code. Think of them as supercharged text editors with AI built in.

On Friday evening, Google announced that it had hired Windsurf’s CEO Varun Mohan, co-founder Douglas Chen, and several key researchers for $2.4 billion.

On Monday, Cognition, the company behind Devin, the self-described “AI engineer,” announced that it had acquired Windsurf for an undisclosed sum, noting that its remaining 250 employees will “participate financially in this deal.”

Why does this matter to designers?

The AI tools market is changing very rapidly. With AI helping to write these applications, their numbers and features are always increasing—or in this case, maybe consolidating. Choose wisely before investing too deeply into one particular tool. The one piece of advice I would give here is to avoid lock-in. Don’t get tied to a vendor. Ensure that your tool of choice can export your work—the code.

Jason Lemkin has more on the business side of things and how it affects VC-backed startups.

preview-1752536770924.png

Did Windsurf Sell Too Cheap? The Wild 72-Hour Saga and AI Coding Valuations

The last 72 hours in AI coding have been nothing short of extraordinary. What started as a potential $3 billion OpenAI acquisition of Windsurf ended with Google poaching Windsurf’s CEO and co…

saastr.com iconsaastr.com

Geoffrey Litt, Josh Horowitz, Peter van Hardenberg, and Todd Matthews, writing a paper for the research lab Ink & Switch, offer a great, well-thought-out piece on what they call “malleable software.”

We envision a new kind of computing ecosystem that gives users agency as co-creators. … a software ecosystem where anyone can adapt their tools to their needs with minimal friction. … When we say ‘adapting tools’ we include a whole range of customizations, from making small tweaks to existing software, to deep renovations, to creating new tools that work well in coordination with existing ones. Adaptation doesn’t imply starting over from scratch.

In their paper, they use analogies like kitchen tools and tool arrangement in a workshop to explore their idea. With regard to the current crop of AI prompt-to-code tools:

We think these developments hold exciting potential, and represent a good reason to pursue malleable software at this moment. But at the same time, AI code generation alone does not address all the barriers to malleability. Even if we presume that every computer user could perfectly write and edit code, that still leaves open some big questions.

How can users tweak the existing tools they’ve installed, rather than just making new siloed applications? How can AI-generated tools compose with one another to build up larger workflows over shared data? And how can we let users take more direct, precise control over tweaking their software, without needing to resort to AI coding for even the tiniest change? None of these questions are addressed by products that generate a cloud-hosted application from a prompt.

Kind of a different take than the “personal software” we’ve seen written about before.

preview-1752208778544.jpg

Malleable software: Restoring user agency in a world of locked-down apps

The original promise of personal computing was a new kind of clay. Instead, we got appliances: built far away, sealed, unchangeable. In this essay, we envision malleable software: tools that users can reshape with minimal friction to suit their unique needs.

inkandswitch.com iconinkandswitch.com

This post has been swimming in my head since I read it. Elena Verna, who joined Lovable just over a month ago to lead marketing and growth, writing in her newsletter, observes that everyone at the company is an AI-native employee. “An AI-native employee isn’t someone who ‘uses AI.’ It’s someone who defaults to AI,” she says.

On how they ship product:

Here, when someone wants to build something (anything) - from internal tools, to marketing pages, to writing production code - they turn to AI and… build it. That’s it.

No headcount asks. No project briefs. No handoffs. Just action.

At Lovable, we’re mostly building with… Lovable. Our Shipped site is built on Lovable. I’m wrapping hackathon sponsorship intake form in Lovable as we speak. Internal tools like credit giveaways and influencer management? Also Lovable (soon to be shared in our community projects so ya’ll can remix them too). On top of that, engineering is using AI extensively to ship code fast (we don’t even really have Product Managers, so our engineers act as them).

I’ve been hearing about more and more companies operating this way. Crazy time to be alive.

More on this topic in a future long-form post.

preview-1752160625907.png

The rise of the AI-native employee

Managers without vertical expertise, this is your extinction call

elenaverna.com iconelenaverna.com

John Calhoun joined Apple 30 years ago as a programmer to work on the Color Picker.

Having never written anything in assembly, you can imagine how overjoyed I was. It’s not actually a very accurate analogy, but imagine someone handing you a book in Chinese and asking you to translate it into English (I’m assuming here that you don’t know Chinese of course). Okay, it wasn’t that hard, but maybe you get a sense that this was quite a hurdle that I would have to overcome.

Calhoun was given an old piece of code and tasked with updating it. Instead, he translated it into a programming language he knew—C—and then decided to add to the feature. He explains:

I disliked HSL as a color space, I preferred HSV (Hue, Saturation, Value) because when I did artwork I was more comfortable thinking about color in those terms. So writing an HSV color picker was on my short list.

When I had my own color picker working I think I found that it was kind of fun. Perhaps for that reason, I struck out again and wrote another color picker. The World Wide Web (www) was a rather new thing that seemed to be catching on, so I naturally thought that an HTML color picker made sense. So I tackled that one as well. It was more or less the RGB color picker but the values were in hexadecimal and a combined RGB string value like “#FFCC33” was made easy to copy for the web designer.
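That combined hex value boils down to a tiny bit of conversion math. As a minimal sketch (my code, not Calhoun’s, which would have been C), here’s the HSV-to-hex conversion a picker like his performs:

```python
# A minimal sketch: convert an HSV color to the combined hex string an
# HTML color picker hands to a web designer, e.g. "#FFCC33".
import colorsys

def hsv_to_hex(h: float, s: float, v: float) -> str:
    """h, s, v in [0, 1]; returns an sRGB hex string like '#FFCC33'."""
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    return "#{:02X}{:02X}{:02X}".format(
        round(r * 255), round(g * 255), round(b * 255)
    )

print(hsv_to_hex(45 / 360, 0.8, 1.0))  # -> "#FFCC33", the example above
```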

So an engineer decided, all on his own, that he’d add a couple extra features. Including the fun crayon picker:

On a roll, I decided to also knock out a “crayon picker”. At this point, to be clear, the color picker was working and I felt I understood it well enough. As I say, I was kind of just having some fun now.

Screenshot of a classic Mac OS color picker showing the “Crayon Picker” tab. A green color named “Watercress” is selected, replacing the original orange color. Options include CMYK, HLS, and HSV pickers on the left.

And Calhoun makes this point:

It was frankly a thing I liked about working for Apple in those days. The engineers were the one’s driving the ship. As I said, I wrote an HSV picker because it was, I thought, a more intuitive color space for artists. I wrote the HTML color picker because of the advent of the web. And I wrote the crayon picker because it seemed to me to be the kind of thing Apple was all about: HSL, RGB — these were kind of nerdy color spaces — a box of crayons is how the rest of us picked colors.

Making software—especially web software—has matured since then, with product managers and designers now collaborating closely with engineers. But with AI coding assistants, the idea of an individual contributor making solo decisions and shipping code might become de rigueur again.

Man sitting outside 2 Infinite Loop, Apple’s former headquarters in Cupertino, holding a book with an ID badge clipped to his jeans.

Almost Fired

I was hired on at Apple in October of 1995. This was what I refer to as Apple’s circling the drain period. Maybe you remember all the doomsaying — speculation that Apple was going to be shuttering soon. It’s a little odd perhaps then that they were hiring at all but apparently Apple reasoned that they nonetheless needed another “graphics engineer” to work on the technology known as QuickdrawGX. I was then a thirty-one year old programmer who lived in Kansas and wrote games for the Macintosh — surely, Apple thought, I would be a good fit for the position.

engineersneedart.com iconengineersneedart.com

Let’s continue down Mac memory lane with this fun post from Basic Apple Guy:

With macOS 26, Apple has announced a dramatically new look to their UI: Liquid Glass. Solid material icon elements give way to softer, shinier, glassier icons. The rounded rectangle became slightly more rounded, and Apple eliminated the ability for icon elements to extend beyond the icon rectangle (as seen in the current icons for GarageBand, Photo Booth, Dictionary, etc.).

With this release being one of the most dramatic visual overhauls of macOS’s design, I wanted to begin a collection chronicling the evolution of the system icons over the years. I’ve been rolling these out on social media over the past week and will continue to add to and update this collection slowly over the summer. Enjoy!

preview-1752036853593.png

macOS Icon History

Documenting the evolution of macOS system icons over the past several decades.

basicappleguy.com iconbasicappleguy.com