
8 design breakthroughs defining AI’s future
How key interface decisions are shaping the next era of human-computer interaction

It’s 11 degrees Fahrenheit as I step off the plane at Toronto Pearson International. I’ve been up for nearly 24 hours and am about to trek through the gates toward Canadian immigration. Getting here from 73-degree San Diego was a significant challenge. What should have been a quick five-hour direct flight turned into a five-hour delay, then a cancellation, and then a rebooking onto a red-eye through SFO. And I can’t sleep on planes. On top of that, I’m recovering from the flu, so my head is still congested, and the descents on both flights were excruciating.
After a short secondary screening for who knows what reason—the second Canada Border Services Agency officer didn’t know either—I make my way to the UP Express train and head towards downtown Toronto. Before reaching Union Station, the train stops at Weston and Bloor, picking up scarfed, ear-muffed, and shivering commuters. I disembark at Union Station, find my way to the PATH, and head towards the CN Tower. I’m staying at the Marriott attached to the Blue Jays stadium.
Appearing on Joe Rogan’s podcast this week, Meta CEO Mark Zuckerberg said that Apple “[hasn’t] really invented anything great in a while. Steve Jobs invented the iPhone, and now they’re just kind of sitting on it 20 years later.”
Let’s take a look at some hard metrics, shall we?
I searched the USPTO site for patents filed by Apple and Meta since 2007. In that period, Apple filed for 44,699 patents. Meta, née Facebook, filed for 4,839, or roughly 11% of Apple’s total.


Fabricio Teixeira and Caio Braga, in their annual The State of UX report:
Despite all the transformations we’re seeing, one thing we know for sure: Design (the craft, the discipline, the science) is not going anywhere. While Design only became a more official profession in the 20th century, the study of how craft can be applied to improve business dates back to the early 1800s. Since then, only one thing has remained constant: how Design is done is completely different decade after decade. The change we’re discussing here is not a revolution, just an evolution. It’s simply a change in how many roles will be needed and what they will entail. “Digital systems, not people, will do much of the craft of (screen-level) interaction design.”
Scary words for the UX design profession as it stares down the coming onslaught of AI. Our industry isn’t the first to face this—copywriters, illustrators, and stock photographers have already faced the disruption of their respective crafts. All of these creatives have had to pivot quickly. And so will we.

I recently read a post on Threads in which Stephen Beck wonders why the New York Times needs an external advertising agency when it already has an award-winning agency in-house. You can read the back-and-forth in the thread itself, but I think Nina Alter’s reply sums it up best:
Creatives need to be free to bring new perspectives. Drink other kool-aid. That’s much of the value in agencies.
This all got me thinking about the differences between working in-house and at an agency. As a designer who began my career bouncing from agency to agency before settling in-house, I’ve seen both sides of this debate firsthand. Many of my designer friends have had similar paths. So, I’ll speak from that perspective. It’s biased and probably a little outdated since I haven’t worked at an agency since 2020, and that was one that I owned.

I’ve always been a maker at heart—someone who loves to bring ideas to life. When AI exploded, I saw a chance to create something new and meaningful for solo designers. But making Griffin AI was only half the battle…
About a year ago, a few months after GPT-4 was released and took the world by storm, I worked on several AI features at Convex. One was a straightforward email drafting feature, but with a twist: we incorporated details we knew about the sender—such as their role and offering—and about the recipient, including their role and their company’s industry. To accomplish this, I combined prompt engineering with data from our data providers to shape the responses we got from GPT-4.
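To make that concrete, here’s a minimal sketch of the pattern, not Convex’s actual implementation. It assumes the current OpenAI Python SDK, and the function name, parameters, and prompt wording are hypothetical stand-ins for the sender and recipient details we pulled from our data providers.

```python
# A minimal sketch: inject CRM-style context about the sender and
# recipient into the prompt, then let the model draft the email.
# Field names and prompt wording are illustrative, not Convex's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_email(sender_role: str, offering: str,
                recipient_name: str, recipient_role: str,
                industry: str) -> str:
    """Draft a personalized outreach email from known context."""
    system_prompt = (
        "You draft concise, professional outreach emails. "
        "Personalize the draft using the context provided."
    )
    user_prompt = (
        f"Sender: a {sender_role} offering {offering}.\n"
        f"Recipient: {recipient_name}, a {recipient_role} at a company "
        f"in the {industry} industry.\n"
        "Write a short first-touch email tailored to this recipient."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        temperature=0.7,  # a bit of creative variation between drafts
    )
    return response.choices[0].message.content


print(draft_email("account executive", "a data-enrichment platform",
                  "Dana", "VP of Marketing", "logistics"))
```

The twist was all in the context: the more the prompt knew about both parties, the less generic the draft felt.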
Playing with this new technology was incredibly fun and eye-opening. And it gave me an idea. Foundational large language models (LLMs) aren’t great yet at factual data retrieval and analysis, but they’re pretty decent at creativity. No, GPT, Claude, or Gemini couldn’t write an Oscar-winning screenplay or win the Pulitzer Prize for poetry, but they’re not bad for starter ideas that are good enough for specific use cases. Hold that thought.

Apple finally launched its Vision Pro “spatial computing” device in early February. We immediately saw TikTok memes of influencers being ridiculous. I wrote about my hope for the Apple Vision Pro back in June 2023, when it was first announced. When preorders opened for Vision Pro in January, I told myself I wouldn’t buy it. I couldn’t justify the $3,500 price tag. Out of morbid curiosity, I would lurk in the AVP subreddits to live vicariously through those who did take the plunge.
After about a month of reading all the positives from users about the device, I impulsively bought an Apple Vision Pro. I placed my order online at noon and picked it up just two hours later at an Apple Store near me.
Many great articles and YouTube videos have already been produced, so this post won’t be a top-to-bottom review of the Apple Vision Pro. Instead, I’ll try to frame it from my standpoint as someone who has designed user experiences for VR.

After years of rumors and speculation, Apple finally unveiled its mixed reality headset yesterday in a classic “One more thing…” segment of its keynote. Dubbed Apple Vision Pro, the device is perfectly Apple: it’s human-first. It’s centered around extending human productivity, communication, and connection. It’s telling that one of the core problems Apple solved was VR isolation: users are cut off from the real world, unaware of what’s going on around them, and everyone nearby can see it. Insert meme of oblivious VR user here. With the Vision Pro, when someone else is nearby, they show through the interface. Additionally, an outward-facing display shows the user’s eyes. These two innovative features help maintain the basic human behavior of acknowledging each other’s presence in the same room.

I know a thing or two about VR and building practical apps for it. Back in the mid-2010s, I cofounded a VR startup called Transported. My cofounders and I created a platform for touring real estate in VR. We wanted to help homebuyers and apartment hunters shop for real estate more efficiently: instead of zigzagging across town to multiple open houses on a Sunday afternoon, you could tour 20 homes in an hour from your living room couch. Of course, “virtual tours” already existed—cheap panoramas on real estate websites and “dollhouse” tours created with Matterport technology. Ours were immersive; you felt like you were there. It was the future! There were several problems to solve, including 360° photography, stitching rooms together, building a player, and, most importantly, distribution. Back in 2015–2016, our theory was that Facebook, Google, Microsoft, Sony, and Apple would quickly make VR commonplace because they were pouring billions of R&D and marketing dollars into the space. It turned out we were a little ahead of our time.