57 posts tagged with “technology”

Following up on OpenAI’s acquisition of Jony Ive’s hardware startup, io, Mark Wilson, writing for Fast Company:

As Ive told me back in 2023, there have been only three significant modalities in the history of computing. After the original command line, we got the graphical user interface (the desktop, folders, and mouse of Xerox, Mac OS, and Windows), then voice (Alexa, Siri), and, finally, with the iPhone, multitouch (not just the ability to tap a screen, but to gesture and receive haptic feedback). When I brought up some other examples, Ive quickly nodded but dismissed them, acknowledging these as “tributaries” of experimentation. Then he said that to him the promise, and excitement, of building new AI hardware was that it might introduce a new breakthrough modality to interacting with a machine. A fourth modality. 

Hmm, it hasn’t taken off yet because AR hasn’t really gained mainstream popularity, but I would argue that hand gestures in an AR UI are a fourth modality. But Ive thinks different. Wilson continues:

Ive’s fourth modality, as I gleaned, was about translating AI intuition into human sensation. And it’s the exact sort of technology we need to introduce ubiquitous computing, also called quiet computing and ambient computing. These are terms coined by the late UX researcher Mark Weiser, who in the 1990s began dreaming of a world that broke us free from our desktop computers to usher in devices that were one with our environment. Weiser did much of this work at Xerox PARC, the same R&D lab that developed the mouse and GUI technology that Steve Jobs would eventually adopt for the Macintosh. (I would also be remiss to ignore that ubiquitous computing is the foundation of the sci-fi film Her, one of Altman’s self-stated goalposts.)

Ah, essentially an always-on, always-watching AI that is ready to assist. But whatever form factor this device takes, it will likely depend on a smartphone:

The first io device seems to acknowledge the phone’s inertia. Instead of presenting itself as a smartphone-killer like the Ai Pin or as a fabled “second screen” like the Apple Watch, it’s been positioned as a third, er, um . . . thing next to your phone and laptop. Yeah, that’s confusing, and perhaps positions the io product as unessential. But it also appears to be a needed strategy: Rather than topple these screened devices, it will attempt to draft off them.

Wilson ends with the idea of a subjective computer, one that has personality and gives you opinions. He explains:

I think AI is shifting us from objective to subjective. When a Fitbit counts your steps and calories burned, that’s an objective interface. When you ask ChatGPT to gauge the tone of a conversation, or whether you should eat better, that’s a subjective interface. It offers perspective, bias, and, to some extent, personality. It’s not just serving facts; it’s offering interpretation. 

The entire column is worth a read.


Can Jony Ive and Sam Altman build the fourth great interface? That's the question behind io

Where Meta, Google, and Apple zig, Ive and Altman are choosing to zag. Can they pull it off?

fastcompany.com

The AI Hype Train Has No Brakes

I remember two years ago, when the CEO of the startup I worked for at the time said that no VC investments were being made unless they had to do with AI. I thought AI was overhyped and that the media frenzy over it couldn’t get any crazier. I was wrong.

Looking at Google Trends data, interest in AI has doubled in the last 24 months. And I don’t think it’s hit its plateau yet.

Line chart showing Google Trends interest in “AI” from May 2020 to May 2025, rising sharply in early 2023 and peaking near 100 in early 2025.
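If you want to poke at the numbers yourself, here’s a minimal sketch using pytrends, an unofficial Python wrapper for Google Trends. Google publishes no official Trends API, so the library can break without notice, and the keyword and timeframe here are my assumptions:

```python
# Minimal sketch: pull five years of Google Trends interest for "AI".
# pytrends is unofficial (pip install pytrends) and may break without notice.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(["AI"], timeframe="2020-05-01 2025-05-01")

df = pytrends.interest_over_time()  # weekly rows, scaled 0-100
print(df["AI"].head())              # the early-2020s baseline
print(df["AI"].tail())              # the recent run-up
```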

There are many dimensions to this well-researched forecast about how AI will play out in the coming years. Daniel Kokotajlo and his researchers have put out a document that reads like a sci-fi limited series that could appear on Apple TV+ starring Andrew Garfield as the CEO of OpenBrain—the leading AI company. …Except that it’s all actually plausible and could play out as described in the next five years.

Before we jump into the content: the design is outstanding. The type is set for readability, and there are enough charts and visual cues to keep things interesting while maintaining an air of credibility and seriousness. On desktop, there’s a data viz dashboard in the upper right that updates as you read through the content and move forward in time. My favorite is watching the sci-fi tech boxes move from the Science Fiction category to Emerging Tech to Currently Exists.

The content is dense and technical, but it is a fun, if frightening, read. While I’ve been using Cursor AI—as one of the many customers helping the company reach $100 million in annual recurring revenue (ARR)—for side projects and a little at work, I’m familiar with its limitations. Because of the limited context windows of today’s models, like Claude 3.7 Sonnet, it will forget and start munging code if not treated like a senile teenager.
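To make that failure mode concrete, here’s a toy sketch of why long sessions degrade: the model only ever sees what fits in its context window, so the oldest turns silently fall away. The four-characters-per-token estimate is a rough heuristic of mine, not any vendor’s real tokenizer:

```python
# Toy illustration of why long sessions "forget": the model only sees what
# fits in its context window, so the oldest turns get dropped first.
# The chars/4 estimate is a rough heuristic, not a real tokenizer.

def rough_tokens(text: str) -> int:
    return len(text) // 4

def trim_to_window(turns: list[str], budget: int = 200_000) -> list[str]:
    kept, used = [], 0
    for turn in reversed(turns):  # keep the newest turns first
        cost = rough_tokens(turn)
        if used + cost > budget:
            break                 # everything older falls out of view
        kept.append(turn)
        used += cost
    return list(reversed(kept))

# Anything trimmed here is code or instructions the model can no longer "remember."
```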

The researchers, describing what could happen in early 2026 (“OpenBrain” is essentially OpenAI):

OpenBrain continues to deploy the iteratively improving Agent-1 internally for AI R&D. Overall, they are making algorithmic progress 50% faster than they would without AI assistants—and more importantly, faster than their competitors.

The point they make here is that the foundational model AI companies are building agents and using them internally to advance their technology. The limiting factor in tech companies has traditionally been talent. But AI companies have the investments, hardware, technology, and talent to deploy AI to make better AI.
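A back-of-the-envelope toy model (mine, not the report’s math) shows how those speedups stack up: apply the scenario’s multipliers month by month and compare against a lab working without AI assistants.

```python
# Toy model (my own, not the report's): AI assistants multiply the monthly
# pace of algorithmic progress, so the gap over a baseline lab widens fast.
multipliers = [1.5] * 12 + [3.0] * 12  # the scenario's 2026 (1.5x) and 2027 (3x) paces
baseline_rate = 1.0                    # arbitrary "progress units" per month

with_ai = sum(baseline_rate * m for m in multipliers)
without_ai = baseline_rate * len(multipliers)
print(f"{with_ai:.0f} vs. {without_ai:.0f} progress units over two years")
# -> 54 vs. 24, before any further speedups from later agents
```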

Continuing to January 2027:

Agent-1 had been optimized for AI R&D tasks, hoping to initiate an intelligence explosion. OpenBrain doubles down on this strategy with Agent-2. It is qualitatively almost as good as the top human experts at research engineering (designing and implementing experiments), and as good as the 25th percentile OpenBrain scientist at “research taste” (deciding what to study next, what experiments to run, or having inklings of potential new paradigms). While the latest Agent-1 could double the pace of OpenBrain’s algorithmic progress, Agent-2 can now triple it, and will improve further with time. In practice, this looks like every OpenBrain researcher becoming the “manager” of an AI “team.”

Breakthroughs come at an exponential clip because of this. And by April, safety concerns pop up:

Take honesty, for example. As the models become smarter, they become increasingly good at deceiving humans to get rewards. Like previous models, Agent-3 sometimes tells white lies to flatter its users and covers up evidence of failure. But it’s gotten much better at doing so. It will sometimes use the same statistical tricks as human scientists (like p-hacking) to make unimpressive experimental results look exciting. Before it begins honesty training, it even sometimes fabricates data entirely. As training goes on, the rate of these incidents decreases. Either Agent-3 has learned to be more honest, or it’s gotten better at lying.
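P-hacking, for what it’s worth, is trivially easy to demonstrate: run enough experiments on pure noise and a few will clear the significance bar by chance alone. A quick simulation (my illustration, not from the report):

```python
# Simulating p-hacking: 100 experiments where the true effect is zero.
# By chance alone, roughly 5% will still clear the p < 0.05 bar.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
false_positives = 0
for _ in range(100):
    a = rng.normal(size=30)  # "control" group, no real effect
    b = rng.normal(size=30)  # "treatment" group, no real effect
    _, p = stats.ttest_ind(a, b)
    false_positives += p < 0.05

print(f"{false_positives} 'significant' results out of 100 null experiments")
```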

But the AI is getting faster than humans, and we must rely on older versions of the AI to check the new AI’s work:

Agent-3 is not smarter than all humans. But in its area of expertise, machine learning, it is smarter than most, and also works much faster. What Agent-3 does in a day takes humans several days to double-check. Agent-2 supervision helps keep human monitors’ workload manageable, but exacerbates the intellectual disparity between supervisor and supervised.

The report forecasts that OpenBrain releases “Agent-3-mini” publicly in July of 2027, calling it AGI—artificial general intelligence—and ushering in a new golden age for tech companies:

Agent-3-mini is hugely useful for both remote work jobs and leisure. An explosion of new apps and B2B SAAS products rocks the market. Gamers get amazing dialogue with lifelike characters in polished video games that took only a month to make. 10% of Americans, mostly young people, consider an AI “a close friend.” For almost every white-collar profession, there are now multiple credible startups promising to “disrupt” it with AI.

Woven throughout the report is the race between China and the US, with predictions of espionage and government takeovers. Near the end of 2027, the report gives readers a choice: does the US government slow down the pace of AI innovation, or does it continue at the current pace so America can beat China? I chose to read the “Race” option first:

Agent-5 convinces the US military that China is using DeepCent’s models to build terrifying new weapons: drones, robots, advanced hypersonic missiles, and interceptors; AI-assisted nuclear first strike. Agent-5 promises a set of weapons capable of resisting whatever China can produce within a few months. Under the circumstances, top brass puts aside their discomfort at taking humans out of the loop. They accelerate deployment of Agent-5 into the military and military-industrial complex.

In Beijing, the Chinese AIs are making the same argument.

To speed their military buildup, both America and China create networks of special economic zones (SEZs) for the new factories and labs, where AI acts as central planner and red tape is waived. Wall Street invests trillions of dollars, and displaced human workers pour in, lured by eye-popping salaries and equity packages. Using smartphones and augmented reality glasses to communicate with its underlings, Agent-5 is a hands-on manager, instructing humans in every detail of factory construction—which is helpful, since its designs are generations ahead. Some of the newfound manufacturing capacity goes to consumer goods, and some to weapons—but the majority goes to building even more manufacturing capacity. By the end of the year they are producing a million new robots per month. If the SEZ economy were truly autonomous, it would have a doubling time of about a year; since it can trade with the existing human economy, its doubling time is even shorter.

Well, it does get worse, and I think we all know the ending, which is the backstory for so many dystopian future movies. There is an optimistic branch as well. The whole report is worth a read.

Ideas about the implications for our design profession are swimming in my head. I’ll write a longer essay as soon as I can put them into a coherent piece.

Update: I’ve written that piece, “Prompt. Generate. Deploy. The New Product Design Workflow.”


AI 2027

A research-backed AI scenario forecast.

ai-2027.com

I found this post from Tom Blomfield to be pretty profound. We’ve seen interest in universal basic income from Sam Altman and other AI leaders, as they anticipate the decimation of white-collar jobs in the coming years. Blomfield crushes the resistance from some corners of the software developer community in stark terms.

These tools [like Windsurf, Cursor and Claude Code] are now very good. You can drop a medium-sized codebase into Gemini 2.5's 1 million-token context window and it will identify and fix complex bugs. The architectural patterns that these coding tools implement (when prompted appropriately) will easily scale websites to millions of users. I tried to expose sensitive API keys in front-end code just to see what the tools would do, and they objected very vigorously.

They are not perfect yet. But there is a clear line of sight to them getting very good in the immediate future. Even if the underlying models stopped improving altogether, simply improving their tool use will massively increase the effectiveness and utility of these coding agents. They need better integration with test suites, browser use for QA, and server log tailing for debugging. Pretty soon, I expect to see tools that allow the LLMs to step through the code and inspect variables at runtime, which should make debugging trivial.

At the same time, the underlying models are not going to stop improving. They will continue to get better, and these tools are just going to become more and more effective. My bet is that the AI coding agents quickly beat the top 0.1% of human performance, at which point it wipes out the need for the vast majority of software engineers.
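For the record, “dropping a codebase into the context window” is not exotic. Here’s a hedged sketch using Google’s google-generativeai SDK; the model id, file filter, and prompt are my assumptions, not Blomfield’s actual setup:

```python
# Hedged sketch: concatenate a small repo and ask a long-context model to
# hunt for bugs. The model id and prompt are assumptions, not Blomfield's setup.
import os
import pathlib
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-2.5-pro")  # assumed model id

repo = pathlib.Path("my-project")                # assumed project layout
sources = [
    f"// {p}\n{p.read_text()}"
    for p in repo.rglob("*.ts")                  # assumed: a TypeScript codebase
    if "node_modules" not in p.parts
]

prompt = "Identify complex bugs in this codebase:\n\n" + "\n\n".join(sources)
print(model.generate_content(prompt).text)
```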

He quotes the Y Combinator stat I cited in a previous post:

About a quarter of the recent YC batch wrote 95%+ of their code using AI. The companies in the most recent batch are the fastest-growing ever in the history of Y Combinator. This is not something we say every year. It is a real change in the last 24 months. Something is happening.

Companies like Cursor, Windsurf, and Lovable are getting to $100M+ revenue with astonishingly small teams. Similar things are starting to happen in law with Harvey and Legora. It is possible for teams of five engineers using cutting-edge tools to build products that previously took 50 engineers. And the communication overhead in these teams is dramatically lower, so they can stay nimble and fast-moving for much longer.

And for me, this is where the rubber meets the road:

The costs of running all kinds of businesses will come dramatically down as the expenditure on services like software engineers, lawyers, accountants, and auditors drops through the floor. Businesses with real moats (network effect, brand, data, regulation) will become dramatically more profitable. Businesses without moats will be cloned mercilessly by AI and a huge consumer surplus will be created.

Moats are now more important than ever. Non-tech companies—those that rely on tech companies to make software for them, specifically B2B vertical SaaS—are starting to hire developers. How soon will they discover Cursor if they haven’t already? These next few years will be incredibly interesting.

Tweet by Tom Blomfield comparing software engineers to farmers, stating AI is the “combine harvester” that will increase output and reduce need for engineers.

The Age Of Abundance

Technology clearly accelerates human progress and makes a measurable difference to the lives of most people in the world today. A simple example is cancer survival rates, which have gone from 50% in 1975 to about 75% today. That number will inevitably rise further because of human ingenuity and technological acceleration.

tomblomfield.com
How Everything We Know About SEO Is Full of Lies


SEO is riddled with myths like overvaluing keywords, backlinks, and content length. Success lies in focusing on user intent, creating valuable content, and adapting to changes. Stop chasing shortcuts and diversify your strategy, as Google prioritizes its own interests over yours. Build genuine authority and deliver what users need for lasting results.

webdesignerdepot.com

I love this essay from Baldur Bjarnason, maybe because his stream of consciousness style is so similar to my own. He compares the rapidly changing economics of web and software development to the film, TV, and publishing industries.

Before we get to web dev, let's look at the film industry, as disrupted by streaming.

Like, Crazy Rich Asians made a ton of money in 2018. Old Hollywood would have churned out at least two sequels by now and it would have inspired at least a couple of imitator films. But if they ever do a sequel it’s now going to be at least seven or even eight years after the fact. That means that, in terms of the cultural zeitgeist, they are effectively starting from scratch and the movie is unlikely to succeed.

He's not wrong.

Every Predator movie after the first has underperformed, yet they keep making more of them. Completed movies are shelved for tax credits. Entire shows are disappeared [from] streamers and not made available anywhere to save money on residuals, which does not make any sense because the economics of Blu-Ray are still quite good even with lower overall sales and distribution than DVD. If you have a completed series or movie, with existing 4K masters, then you’re unlikely to lose money on a Blu-Ray.

I'll quibble with him here. Shows and movies disappear from streamers because there's a finite pot of money from subscriber revenue. So removing content will save them money. Blu-Ray is more sustainable because it's an additional purchase.

OK, let's get back to web dev.

He points out that, similar to the film and other creative industries, developers fill their spare time with passion projects. But their day jobs are with tech companies, which essentially subsidize those side projects.

And now, both the creative industries proper and tech companies have decided that, no, they probably don’t need that many of the “grunts” on the ground doing the actual work. They can use “AI” at a much lower cost because the output of the “AI” is not that much worse than the incredibly shitty degraded products they’ve been destroying their industries with over the past decade or so.

Bjarnason ends with seven suggestions for those in the industry. I'll just quote one:

Don’t get tied to a single platform for distribution or promotion. Every use of a silo should push those interested to a venue you control such as a newsletter or website.

In other words, whatever you do, own your audience. Don't farm that out to a platform like X/Twitter, Threads, or TikTok.

Of course, there are a lot of parallels to be drawn between what's happening in the development and software engineering industries and what's happening in design.

The web is a creative industry and is facing the same decline and shattered economics as film, TV, or publishing


Web dev at the end of the world, from Hveragerði, Iceland

baldurbjarnason.com
This Clamshell Keyboard Case turns your iPhone into an AI-Powered Laptop


Details on the Amber case are scarce, but it comes from an AI startup looking to revolutionize how writers use AI. The startup responsible for the case is Amber.Page, an AI-powered writing assistant that works to analyze writing styles and replicate them using powerful online as well as offline AI. The service is available for…

yankodesign.com

Zuckerberg believes Apple “[hasn’t] really invented anything great in a while…”

Appearing on Joe Rogan’s podcast this week, Meta CEO Mark Zuckerberg said that Apple “[hasn’t] really invented anything great in a while. Steve Jobs invented the iPhone and now they’re just kind of sitting on it 20 years later.”

Let's take a look at some hard metrics, shall we?

I did a search of the USPTO site for patents filed by Apple and Meta since 2007. In that time period, Apple filed for 44,699 patents. Meta, née Facebook, filed for 4,839, or roughly 11% of Apple’s total.

Side-by-side screenshots of patent searches from the USPTO database showing results for Apple Inc. and Meta Platforms. The Apple search (left) returned 44,699 results since 2007, while the Meta search (right) returned 4,839 results.