
240 posts tagged with “ai”

I sent this article to both of my kids this week. My daughter is in college studying publishing. My son is a high school senior planning to go into real estate. Neither of them works in tech. That’s exactly why they need to read it.

Matt Shumer has spent six years building an AI startup and investing in the space. He wrote this piece for the people in his life who keep asking “so what’s the deal with AI?”—and getting the sanitized answer:

I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I’ve lost my mind. And for a while, I told myself that was a good enough reason to keep what’s truly happening to myself. But the gap between what I’ve been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.

I know this feeling. I wrote yesterday about how AI is collapsing the gap between design and code and shifting the designer’s value toward taste and orchestration. That essay was for the software design industry. Shumer is writing for everyone else.

His core argument: tech workers have already lived through the disruption that’s coming for every other knowledge-work profession. He explains why tech got hit first:

The AI labs made a deliberate choice. They focused on making AI great at writing code first… because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version. Making AI great at coding was the strategy that unlocks everything else. That’s why they did it first.

Christina Wodtke agrees something big is happening but thinks Shumer’s timeline for everyone else is off. Programming, she argues, is a near-ideal use case for AI—there’s an ocean of public training data, and code has a built-in quality check: it runs or it doesn’t. Hallucinations get caught by the compiler. Other fields aren’t so clean-cut.

Shumer makes the classic tech-insider mistake: assuming his experience generalizes to everyone else’s. It doesn’t. Ethan Mollick’s “jagged frontier” of AI capability is as jagged as ever. AI is spectacular at some tasks and embarrassingly bad at others, and the pattern doesn’t map to human intuitions about difficulty.

She makes another point that matters for anyone in a creative field:

A nuance Shumer completely misses: industries where there isn’t one right answer but there are better and worse answers may actually fare better with AI. When you’re writing strategy, designing an experience, or crafting a narrative, a “hallucination” isn’t necessarily a bug. It might be an interesting idea.

That maps to what I know is true in design. A wrong answer in code crashes the app. A wrong answer in a design brainstorm might be the seed of something good.

This is why I sent Shumer’s piece to my kids but didn’t tell them to panic. Publishing runs on editorial judgment, taste, and relationships with authors. Real estate depends on physical presence, local knowledge, and trust built over handshakes. Neither field has the clean training data and binary pass/fail that made coding so vulnerable so fast. But that doesn’t mean nothing changes. Wodtke again:

Your job probably won’t disappear. But parts of it will shift, and the timeline depends on your field’s specific relationship to data, verification, and ambiguity. Prepare thoughtfully instead of panicking.

Shumer’s practical advice is modest: use AI one hour a day, experiment with it. Not reading about it, but really using it. I’d add Wodtke’s framing to that: spend the hour figuring out which parts of your work sit on the easy side of the jagged frontier, and which parts don’t. That’s more useful than assuming the whole thing collapses overnight.

I said yesterday that the gap between “designer who orchestrates AI” and “designer who pushes pixels” will be enormous within 12 months. Shumer is making that same argument for every knowledge-work profession. The whole piece is worth your time and maybe worth sharing with someone who’s been resistant to AI. Just keep in mind Wodtke’s nuance.

Matt Shumer" card with gold title, subheading "notes on building ai products, models, and demos", shumer.dev logo and @mattshumer_

Something Big Is Happening

A personal note for non-tech friends and family on what AI is starting to change.

shumer.dev
Silhouette of a meditating person beneath a floating iridescent crystal-like structure emitting vertical rainbow light

Product Design Is Changing

I made my first website in Macromedia Dreamweaver in 1999. Its claim to fame was an environment with code on one side and a rudimentary WYSIWYG editor on the other. My site was a simple portfolio, with a couple of animated GIFs thrown in for interest. Over the years, I used other tools to create for the web, but usually, I left the coding to the experts. I’d design in Photoshop, Illustrator, Sketch, or Figma and then hand off to a developer. That changed recently, when I rebuilt this site a couple of times and worked on a Severance fan project.

A couple weeks ago, as an experiment, I pointed Claude Code at our BuildOps design system repo and asked it to generate a screen using our components. It worked after about three prompts. Not one-shotted, but close. I sat there looking at a functioning UI—built from our actual components—and realized I’d just skipped the entire part of my job that I’ve spent many years doing: drawing pictures of apps and websites in a design tool, then handing them to someone else to build.

That moment crystallized something I’d been circling all last year. I wrote last spring about how execution skills were being commoditized and the designer’s value was shifting toward taste and strategic direction. A month later I mapped out a timeline for how design systems would become the infrastructure that AI tools generate against—prompt, generate, deploy. That was ten months ago, and most of it is already happening. Product design is changing. Not in the way most people are talking about it, but in a way that’s more fundamental and more interesting.

What’s Next in Vertical SaaS

After posting my essay about Wall Street and the B2B software stocks tumbling, I came across a few items that pull on the thread even more, pointing to something forward-looking.

First, my old colleague Shawn Smith had a more nuanced reaction to the story. Smith has been both a Salesforce customer many times over and a product manager there.

On the customer side, without exception, the sentiment was that Salesforce is an expensive partial solution. There were always gaps in what it could do, which were filled by janky workarounds. In every case, the organization at least considered building an in-house solution which would cover all the bases *and* cost less than the Salesforce contract. I think the threat of AI to Salesforce is very real in this sense. Companies will use it to build their own solutions, but this outcome is probably at least 2-5 years out in many cases because switching costs are real, and contracts are an obstacle.

He is less convinced about something like Adobe where individual preferences around tooling are more of the determining factor. The underlying threat in Smith’s analysis—that companies will build their own solutions—points to a deeper question about which software businesses have real moats. Especially with newer, AI-native upstarts.

Anthropic published a study that puts numbers to something I’ve been writing about in the design context for a while now. They ran a randomized controlled trial with 52 junior software engineers learning a new Python library. Half used AI assistance. Half coded by hand.

Judy Hanwen Shen and Alex Tamkin, writing for Anthropic Research:

Participants in the AI group scored 17% lower than those who coded by hand, or the equivalent of nearly two letter grades. Using AI sped up the task slightly, but this didn’t reach the threshold of statistical significance.

So the AI group didn’t finish meaningfully faster, but they understood meaningfully less. And the biggest gap was in debugging—the ability to recognize when code is wrong and figure out why. That’s the exact skill you need most when your job is to oversee AI-generated output.

The largest gap in scores between the two groups was on debugging questions, suggesting that the ability to understand when code is incorrect and why it fails may be a particular area of concern if AI impedes coding development.

This is the same dynamic I fear in design. When I wrote about the design talent crisis, educators like Eric Heiman told me “we internalize so much by doing things slower… learning through tinkering with our process, and making mistakes.” Bradford Prairie put it more bluntly: “If there’s one thing that AI can’t replace, it’s your sense of discernment for what is good and what is not good.” But discernment comes from reps, and AI is eating the reps.

The honest framing from Anthropic’s own researchers:

It is possible that AI both accelerates productivity on well-developed skills and hinders the acquisition of new ones.

Credit to Anthropic for publishing research that complicates the case for their own product. And the study’s footnote is worth noting: they used a chat-based AI assistant, not an agentic tool like Claude Code. Their expectation is that “the impacts of such programs on skill development are likely to be more pronounced.”

I can certainly attest that when I use Claude Code, I have no idea what’s going on!

The one bright spot: not all AI use was equal. Participants who asked conceptual questions and used AI to check their understanding scored well. The ones who delegated code generation wholesale scored worst. The difference was whether you were thinking alongside the tool or letting it think for you.

Cognitive effort—and even getting painfully stuck—is likely important for fostering mastery.

Getting painfully stuck. That’s the apprenticeship. That’s the grunt work. And it’s exactly what we’re optimizing away.

Stylized hand pointing to a white sheet with three horizontal rows of black connected dots on a beige background.

How AI assistance impacts the formation of coding skills

Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems.

anthropic.com

I recently spent some time moving my entire note-taking system from Notion to Obsidian, because the latter runs on Markdown files, which are plain text. Why? Because AI runs on text.

And that is also the argument from Patrick Morgan. Your notes, your documented processes, your collected examples of what “good” looks like—if those live in plain text, AI can actually work with them. If they live in your head, or scattered across tools that don’t export, they’re invisible.

There’s a difference between having a fleeting conversation and collaborating on an asset you both work on. When your thinking lives in plain text — especially Markdown — it becomes legible not just to you, but to an AI that can read across hundreds of files, notice patterns, and act at scale.

I like that he frames this as scaffolding rather than some elaborate knowledge management system. He’s honest about the PKM fatigue most of us share:

Personal knowledge management is far from a new concept. Honestly, it’s a topic I started to ignore because too many people were trying to sell me on yet another “life changing” system. Even when I tried to jump through the hoops, it was all just too much for me for too little return. But now that’s changed. With AI, the value is much greater and the barrier to entry much lower. I don’t need an elaborate system. I just need to get my thinking in text so I can share it with my AI.

This is the part that matters for designers. We externalize visual thinking all the time—moodboards, style tiles, component libraries. But we rarely externalize the reasoning behind those decisions in a format that’s portable and machine-readable. Why did we choose that pattern? What were we reacting against? What does “good” look like for this particular problem?

Morgan’s practical recommendation is dead simple: three markdown files. One for process, one for taste, one for raw thinking. That’s it.

This is how your private thinking becomes shared context.

The designers who start doing this now will have documented judgment that AI can actually use.
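
To make “shared context” concrete, here’s a minimal sketch of how those files might get in front of an AI tool. The folder and filenames are my assumptions; Morgan only specifies the split into process, taste, and raw thinking.

```python
from pathlib import Path

NOTES = Path("notes")  # hypothetical folder
FILES = ["process.md", "taste.md", "thinking.md"]  # hypothetical names

def build_context(question: str) -> str:
    """Prepend the three files to a question so any AI tool sees them."""
    sections = []
    for name in FILES:
        path = NOTES / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(sections) + f"\n\n## Question\n{question}"

print(build_context("Does this landing page match my taste notes?"))
```

That’s the whole trick: no database, no plugin, just text concatenated ahead of a question.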

Side profile of a woman's face merged with a vintage keyboard and monitor displaying a black-and-white mountain photo in an abstract geometric collage.

AI Runs on Text. So Should You.

Where human thinking and AI capability naturally meet

open.substack.com

Everyone wants to talk about the AI use case. Nobody wants to talk about the work that makes the use case possible.

Erika Flowers, who led NASA’s AI readiness initiative, has a great metaphor for this on the Invisible Machines podcast. Her family builds houses, and before they could install a high-tech steel roof, they spent a week building scaffolding, setting up tarps, rigging safety harnesses, positioning dumpsters for debris. The scaffolding wasn’t the job. But without it, the job couldn’t happen.

Flowers on where most organizations are with AI right now:

We are trying to just climb up on these roofs with our most high tech pneumatic nail gun and we got all these tools and stuff and we haven’t clipped off to our belay gear. We don’t have the scaffolding set up. We don’t have the tarps and the dumpsters to catch all the debris. We just want to get up there. That is the state of AI and transformation.

The scaffolding is the boring stuff: data integration, governance, connected workflows, organizational readiness. It’s context engineering at the enterprise level. Before any AI feature can do real work, someone has to make sure it has the right data, the right permissions, and the right place in a process. Nobody wants to fund that part.

But Flowers goes further. She argues we’re not just skipping the scaffolding—we’re automating the wrong things entirely. Her example: accounting software uses AI to help you build a spreadsheet faster, then you email it to someone who extracts the one number they actually needed. Why not just ask the AI for the number? We’re using new technology to speed up old workflows instead of asking whether the workflow should exist at all.

Then she gets to the interesting question—who’s supposed to design all of this?

I don’t think it exists necessarily with the roles that we have. It’s going to be a lot closer to Hollywood… producer, director, screenwriter. And I don’t mean as metaphors, I mean literally those people and how they think and how they do it because we’re in a post software era.

She lists therapists, psychologists, wedding planners, dance choreographers. People who know how to choreograph human interactions without predetermined inputs. That’s a different skill set than designing screens, and I think she’s onto something.

Why AI Scaffolding Matters More than Use Cases ft Erika Flowers

We’re in a moment when organizations are approaching agentic AI backwards, chasing flashy use cases instead of building the scaffolding that makes AI agents actually work at scale. Erika Flowers, who led NASA’s AI Readiness Initiative and has advised Meta, Google, Netflix, and Intuit, joins Robb and Josh for a frank and funny conversation about what’s broken in enterprise AI adoption. She dismantles the myth of the “big sexy AI use case” and explains why most AI projects fail before they start. The trio makes the case that we’re entering a post-software world, whether organizations are ready or not.

youtu.be

Every few months a new AI term drops and everyone scrambles to sound smart about it. Context engineering. RAG. Agent memory. MCP.

Tal Raviv and Aman Khan, writing for Lenny’s Newsletter, built an interactive piece that has you learn these concepts by doing them inside Cursor. It’s part article, part hands-on tutorial. But the best parts are when they strip the terms down to what they actually are:

Let that sink in: memory is just a text file prepended to every conversation. There’s no magic here.

That’s it. Agent memory, the thing that sounds like science fiction, is a text file that gets pasted at the top of every chat. Once you know that, you can design for it. You can think about what belongs in that file and what doesn’t, what’s worth the context window space and what’s noise.
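
Here’s roughly what that looks like in practice, as a minimal sketch assuming an OpenAI-style message list. The file name is hypothetical; the point is that “memory” is an ordinary text file.

```python
from pathlib import Path

MEMORY_FILE = Path("memory.md")  # hypothetical; it's just a text file

def with_memory(user_message: str) -> list[dict]:
    """Paste the memory file at the top of every conversation."""
    memory = MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""
    return [
        {"role": "system", "content": f"Things to remember:\n{memory}"},
        {"role": "user", "content": user_message},
    ]
```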

They do the same with RAG:

RAG is a fancy term for “Before I start talking, I gotta go look everything up and read it first.” Despite the technical name, you’ve been doing it your whole life. Before answering a hard question, you look things up. Agents do the same.
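
The core move can be sketched just as simply. Real systems rank documents by embedding similarity; this toy version, with an assumed notes folder, uses keyword overlap only to show the shape: look things up first, then answer.

```python
from pathlib import Path

def retrieve(question: str, folder: str = "notes", k: int = 3) -> list[str]:
    """Rank local files by crude keyword overlap with the question."""
    words = set(question.lower().split())
    docs = [p.read_text() for p in Path(folder).glob("*.md")]
    docs.sort(key=lambda text: sum(w in text.lower() for w in words),
              reverse=True)
    return docs[:k]  # paste these into the prompt before the model answers
```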

Tool calling gets the same treatment. The agent reads a file, decides what to change, and uses a tool to make the edit. As Raviv and Khan point out, you’ve done search-and-replace in Word a hundred times.
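
Stripped of vendor specifics, tool calling reduces to the same text-in, text-out shape. In this sketch the model’s request is assumed to be already parsed into a dict; no real API is being mirrored here.

```python
from pathlib import Path

def run_tool(call: dict) -> str:
    """Execute the one 'tool' this sketch supports: search-and-replace."""
    if call["name"] == "replace_in_file":
        path = Path(call["path"])
        path.write_text(
            path.read_text().replace(call["find"], call["replace"])
        )
        return f"Replaced '{call['find']}' in {call['path']}"
    return f"Unknown tool: {call['name']}"

# Pretend the model emitted this after reading the file:
call = {"name": "replace_in_file", "path": "notes.md",
        "find": "teh", "replace": "the"}
# run_tool(call) returns a string that flows back into the conversation
# as plain text, and the loop continues.
```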

Their conclusion ties it together:

Cursor is just an AI product like any other, composed of text, tools, and results flowing back into more text—except Cursor runs locally on our computer, so we can watch it work and learn. Once we were able to break down any AI product into these same building blocks, our AI product sense came naturally.

This matters for designers. You can’t design well for systems you don’t understand, and you can’t understand systems buried under layers of jargon. The moment someone tells you “memory is just a text file,” you can start asking the right design questions: what goes in it? Who controls it? How does the user know it’s working?

The whole piece is a step-by-step tutorial for PMs, but the underlying lesson is universal. Strip the mystique, see the mechanics, design for what’s actually there.

Two smiling illustrated men with orange watercolor background, caption "How to build" and highlighted text "AI product sense".

How to build AI product sense

The secret is using Cursor for non-technical work (inside: 75 free days of Cursor Pro to try this out!)

open.substack.com

Daniel Miessler pulls an idea from a recent Karpathy interview that’s been rattling around in my head since I read it:

Humans collapse during the course of their lives. Children haven’t overfit yet. They will say stuff that will shock you because they’re not yet collapsed. But we [adults] are collapsed. We end up revisiting the same thoughts, we end up saying more and more of the same stuff, the learning rates go down, the collapse continues to get worse, and then everything deteriorates.

Miessler’s description of what this looks like in practice is uncomfortable:

How many older people do you know who tell the same stories and jokes over and over? Watch the same shows. Listen to the same five bands, and then eventually two. Their aperture slowly shrinks until they die.

I’ve seen this in designers. The ones who peaked early and never pushed past what worked for them. Their work from five years ago looks exactly like their work today. Same layouts, same patterns, same instincts applied to every problem regardless of context. They collapsed and didn’t notice.

Then Miessler, almost in passing:

This was a problem before AI. And now many are delegating even more of their thinking to a system that learns by crunching mediocrity from the internet. I can see things getting significantly worse.

If collapse is what happens when you stop seeking new inputs, then outsourcing your thinking to AI is collapse on fast-forward. You’re not building pattern recognition, you’re borrowing someone else’s average. The outputs look competent. They pass a first glance. But nothing in there surprises anyone, because the model optimizes for the most statistically probable next token.

Use AI to accelerate execution, not to replace the part where you actually have an idea.

Childhood → reading/exposure/tools/comedy → Renewal → Sustained Vitality. Side: Adult Collapse (danger: low entropy, repetition).

Humans Need Entropy

On Karpathy

danielmiessler.com
Floating 3D jigsaw puzzle piece with smooth blue-to-orange gradient and speckled texture on a deep blue background.

What Wall Street Gets Wrong About SaaS

Last week, B2B software companies tumbled in the stock market, dropping over 10%. Software stocks have been trending down since September 2025 and are now down 30% according to the IGV software index. The prevailing sentiment is that AI tools like Anthropic’s Claude are now capable of doing things companies used to pay thousands of dollars for.

Chip Cutter and Sebastian Herrera, writing in the Wall Street Journal:

The immediate catalyst for this week’s selloff was the release of new capabilities for Anthropic’s Claude Cowork, an AI assistant that lets users assign agents to perform many types of tasks on their computers using only natural-language prompts. The tools automate workflows and perform tasks across a gamut of job functions with little human input.

The new plug-ins released about a week ago can review legal contracts and perform other industry-specific functions. An update to its model Thursday enhanced capabilities for financial analysis. 

I recall being in my childhood home in San Francisco, staring at the nine-inch monochrome screen on my Mac, clicking square zoning tiles, building roads, and averting disasters late into the night. Yes, that was SimCity in 1989. I’d go on to play pretty much every version thereafter, though the mobile one isn’t quite the same.

Anyhow, Andy Coenen, a software engineer at Google Brain, decided to build a SimCity version of New York as a way to learn some of the newer gen AI models and tools:

Growing up, I played a lot of video games, and my favorites were world building games like SimCity 2000 and Rollercoaster Tycoon. As a core millennial rapidly approaching middle age, I’m a sucker for the nostalgic vibes of those late 90s / early 2000s games. As I stared out at the city, I couldn’t help but imagine what it would look like in the style of those childhood memories.

So here’s the idea: I’m going to make a giant isometric pixel-art map of New York City. And I’m going to use it as an excuse to push hard on the limits of the latest and greatest generative models and coding agents.

Best case scenario, I’ll make something cool, and worst case scenario, I’ll learn a lot.

The writeup goes deep into the technical process—real NYC city data, fine-tuned image models, custom generation pipelines, and a lot of manual QA when the models couldn’t get water and trees right. Worth reading in full if you’re curious. But his conclusion on what AI means for creative work is where I want to focus.

Coenen on drudgery:

…So much of creative work is defined by this kind of tedious grind.

For example, [as a musician] after recording a multi-part vocal harmony you change something in the mix and now it feels like one of the phrases is off by 15 milliseconds. To fix it, you need to adjust every layer - and this gets more convoluted if you’re using plugins or other processing on the material.

This isn’t creative. It’s just a slog. Every creative field - animation, video, software - is full of these tedious tasks. Of course, there’s a case to be made that the very act of doing this manual work is what refines your instincts - but I think it’s more of a “Just So” story than anything else. In the end, the quality of art is defined by the quality of your decisions - how much work you put into something is just a proxy for how much you care and how much you have to say.

I’d push back slightly on the “Just So story” part—repetition does build instincts that are hard to shortcut. But the broader point holds. And his closer echoes my own sentiment after finishing a massive gen AI project:

If you can push a button and get content, then that content is a commodity. Its value is next to zero.

Counterintuitively, that’s my biggest reason to be optimistic about AI and creativity. When hard parts become easy, the differentiator becomes love.

Check out Coenen’s project here. I think the only thing that’s missing is animated cars on the road.

Bonus: If you’re like me or Andy Coenen and loved SimCity, there’s a free and open-source online game called IsoCity that you can play. It runs natively in the browser.

Isometric pixel-art NYC skyline showing dense skyscrapers, streets, a small park, riverside and a UI title bar with mini-map.

isometric-nyc

cannoneyed.com

Correlation does not equal causation. How many times have we heard that mantra? Back in 2014, Tyler Vigen produced some charts that brought together two curves from entirely different, unrelated sources, like “People who drowned after falling out of a fishing boat correlates with Marriage rate in Kentucky” or “Number of people who were electrocuted by power lines correlates with Marriage rate in Alabama.”

Ten years later, in January 2024, Vigen revamped his Spurious Correlations collection:

In January 2024, I released a big update to the project based on user feedback. I added 25,000 new variables, improved and expanded the discover feature, and added a sprinkle of GenAI (including spurious scholar).

Now every crazy non-causal—but maybe plausible?—correlation is accompanied by an AI-generated illustration, explanation, and “research” paper. For example, in “The number of dietetic technicians in North Carolina correlates with Viewership count for Days of Our Lives,” the AI explanation is:

The shortage led to a lack of food-related subplots and characters, making the show less engaging for food enthusiasts.

Click the random button a few times to get some laughs.

Chart showing searches for 'that is sus' (black) and Lululemon stock (red) both surge after 2020, peaking around 2022–2023.

Spurious Correlations

Correlation is not causation: thousands of charts of real data showing actual correlations between ridiculous variables.

tylervigen.com

If building is cheap and the real bottleneck is knowing what to build, interface design faces the same squeeze. Nielsen Norman Group’s annual State of UX report argues that UI is no longer a differentiator.

Kate Moran, Raluca Budiu, and Sarah Gibbons, writing for Nielsen Norman Group:

UI is still important, but it’ll gradually become less of a differentiator. Equating UX with UI today doesn’t just mislabel our work — it can lead to the mistaken conclusion that UX is becoming irrelevant, simply because the interface is becoming less central.

Design systems standardized the components. AI-mediated interactions now sit on top of the interface itself. The screen matters less when users talk to an agent instead of navigating pages. The report lays out where that leaves designers:

As AI-powered design tools improve, the power of standardization will be amplified and anyone will be able to make a decent-looking UI (at least from a distance). If you’re just slapping together components from a design system, you’re already replaceable by AI. What isn’t easy to automate? Curated taste, research-informed contextual understanding, critical thinking, and careful judgment.

The whole report is worth reading. The thread through all of it—job market, AI fatigue, UI commodification—is that surface-level work won’t survive leaner teams and stronger scrutiny. The value is in depth.

State of UX 2026: Design Deeper to Differentiate headline, NN/g logo, red roller-coaster with stick-figure riders flying off a loop.

State of UX in 2026

UX faced instability from layoffs, hiring freezes, and AI hype; now, the field is stabilizing, but differentiation and business impact are vital.

nngroup.com

Last September I wrote about why we still need a HyperCard for the AI era—a tool that’s accessible but controllable, that lets everyday people build and share software without needing to be developers. John Allsopp sees the demand side of that equation already arriving.

Writing on LinkedIn, he starts with his 13-year-old daughter sending him a link to Aippy, a platform where people create, share, and remix apps like TikTok videos. It already has thousands of apps on it:

Millions of people who have never written a line of code are starting to build applications — not scripts or simple automations, but genuine applications with interfaces and logic and persistence.

The shift Allsopp describes isn’t just about who’s building. It’s about how software spreads:

This pattern — creation, casual sharing, organic spread — looks a lot more like how content moves on TikTok or Instagram than how apps move through the App Store. Software becomes something you make and share, and remix. Not something you publish and sell. It surfaces through social connections and social discovery, not through store listings and search rankings.

And the platforms we have aren’t built for it. Allsopp points out that the appliance model Apple introduced in 2007 made sense for an audience that was intimidated by technology. That audience grew up:

The platforms designed to protect users from complexity are now protecting users from their own creativity and that of their peers.

This is the world I was writing about in “Why We Still Need a HyperCard for the AI Era.” I argued for tools with direct manipulation, technical abstraction, and local distribution—ingredients HyperCard had that current AI coding tools still miss. Allsopp is describing the audience those tools need to serve. The gap between the two is where the opportunity sits.

Article: Here Comes Everybody (Again) — John Allsopp / 27th January, 2026

Here Comes Everybody (Again)

Clay Shirky’s Here Comes Everybody (2008) was about the democratisation of coordination…what happens when everybody builds. Shirky’s vision of a world where “people are given the tools to do things together, without needing traditional organizational structures” didn’t pan out quite as optimistically…

linkedin.com

Earlier I linked to Hardik Pandya’s piece on invisible work—the coordination, the docs, the one-on-ones that hold projects together but never show up in a performance review. Designers have their own version of this problem, and it’s getting worse.

Kai Wong, writing in his Data and Design Substack, puts it plainly. A design manager he interviewed told him:

“It’s always been a really hard thing for design to attribute their hard work to revenue… You can make the most amazingly satisfying user experience. But if you’re not bringing in any revenue out of that, you’re not going to have a job for very much longer. The company’s not going to succeed.”

That’s always been true, but AI made it urgent. When a PM can generate something that “looks okay” using an AI tool, the question is obvious: what do we need designers for? Wong’s answer is the strategic work—research, translation between user needs and business goals. The trouble is that this work is the hardest to see.

Wong’s practical advice is to stop presenting design decisions in design terms. Instead of explaining that Option A follows the Gestalt principle of proximity, say this:

“Option A reduces checkout from 5 to 3 steps, making it much easier for users to complete their purchase instead of abandoning their cart.”

You’re not asking “which looks better?” You’re showing that you understand the business problem and the user problem, and can predict outcomes based on behavioral patterns.

I left a comment on this article when it came out, asking how these techniques translate at the leadership level. It’s one thing to help individual designers frame their work in business terms. It’s another to make an entire design org’s contribution legible to the rest of the company. Product management talks to customers and GTM teams. Engineering delivers features. Design is in the messy middle making sense of it all—and that sense-making is exactly the kind of invisible work that’s hardest to put on a slide.

Figure draped in a white sheet like a ghost wearing dark sunglasses, standing among leafy shrubs with one hand visible.

Designers often do invisible work that matters. Here’s how to show it

What matters in an AI-integrated UX department? Highlighting invisible work

open.substack.com

What happens to a designer when the tool starts doing the thinking? Yaheng Li poses this question in his MFA thesis, “Different Ways of Seeing.” The CCA grad published a writeup about his project in Slanted, explaining that he drew on embodiment research to make a point about how tools change who we are:

Whether they are tools, toys, or mirror reflections, external objects temporarily become part of who we are all the time. When I put my eyeglasses on, I am a being with 20/20 vision, not because my body can do that (it can’t), but because my body-with-augmented-vision-hardware can.

The eyeglasses example is simple but the logic extends further than you’d expect. Li takes it to the smartphone:

When you hold your smartphone in your hand, it’s not just the morphological computation happening at the surface of your skin that becomes part of who you are. As long as you have Wi-Fi or a phone signal, the information available all over the internet (both true and false information, real news and fabricated lies) is literally at your fingertips. Even when you’re not directly accessing it, the immediate availability of that vast maelstrom of information makes it part of who you are, lies and all. Be careful with that.

Now apply that same logic to a designer sitting in front of an AI tool. If the tool becomes an extension of the self, and the tool is doing the visual thinking and layout generation, what does the designer become? Li’s thesis argues that graphic design shapes perception, that it acts as “a form of visual poetry that can convey complex ideas and evoke emotional responses, thus influencing cognitive and cultural shifts.” If that’s true, and I think it is, then the tool the designer uses to make that poetry is shaping the poetry itself.

This is a philosophical piece, not a practical one. But the underlying question is practical for anyone designing with AI right now: if your tools become part of who you are, you should care a great deal about what those tools are doing to your thinking.

Left spread: cream page with text "DIFFERENT WAYS OF SEEING" and "A VISUAL NARRATIVE". Right spread: green hill under blue sky with two cows and a sheep.

Different Ways of Seeing

When I was a child, I once fell ill with a fever and felt as...

slanted.de

For as long as I’ve been in startups, execution speed has been the thing teams optimized for. The assumption was always that if you could just build faster, you’d win. That’s your moat. AI has mostly delivered on that promise: teams can now ship in weeks what used to take months (see Claude Cowork). And the result is that a lot of teams are building the wrong things faster than ever.

Gale Robins, writing for UX Collective, opens with a scene I’ve lived through from both sides of the table:

I watched a talented software team present three major features they’d shipped on time, hitting all velocity metrics. When I asked, “What problem do these features solve?” silence followed. They could describe what they’d built and how they’d built it. But they couldn’t articulate why any of it mattered to customers.

Robins argues that judgment has replaced execution as the real constraint on product teams. And AI is making this worse, not better:

What once took six months of misguided effort now takes six weeks, or with AI, six days.

Six days to build the wrong thing. The build cycle compressed but the thinking didn’t. Teams are still skipping the same discovery steps, still assuming they know what users want. They’re just doing it at a pace that makes the waste harder to catch.

Robins again:

AI doesn’t make bad judgment cheaper or less damaging — it just accelerates how quickly those judgment errors compound.

She illustrates this with a cascade example: a SaaS company interviews only enterprise clients despite SMBs making up 70% of revenue. That one bad call—who to talk to—ripples through problem framing, solution design, feature prioritization, and evidence interpretation, costing $315K over ten months. With AI-accelerated development, the same cascade plays out in five months at the same cost. You just fail twice as fast.

The article goes on to map 19 specific judgment points across the product discovery process. The framework itself is worth a read, but the underlying argument is the part I keep coming back to: as execution gets cheaper, the quality of your decisions is the only thing that scales.

Circle split in half: left teal circuit-board lines with tech icons, right orange hands pointing to a central flowchart.

The anatomy of product discovery judgment

The 19 critical decision moments where human judgment determines whether teams build the right things.

uxdesign.cc

I’ve watched this pattern play out more times than I can count: a team ships something genuinely better and users ignore it. They go back to the old thing. The spreadsheet. The manual process. And the team concludes that users “resist change,” which is the wrong diagnosis.

Tushar Deshmukh, writing for UX Magazine, frames it well:

Many teams assume users dislike change. In reality, users dislike cognitive disruption.

Deshmukh describes an enterprise team that built a predictive dashboard with dynamic tiles, smart filters, and smooth animations. It failed. Employees skipped it and went straight to the basic list view:

Not because the dashboard was bad. But because it disrupted 20 years of cognitive routine. The brain trusted the old list more than the new intelligence. When we merged both—familiar list first, followed by predictive insights—usage soared.

He tells a similar story about a logistics company that built an AI-powered route planner. Technically superior, visually polished, low adoption. Drivers had spent years building mental models around compass orientation, landmarks, and habitual map-reading patterns:

The AI’s “optimal route” felt psychologically incorrect. It was not wrong—it was unfamiliar. We added a simple “traditional route overlay,” showing older route patterns first. The AI suggestion was then followed as an enhancement. Adoption didn’t just improve—trust increased dramatically.

The fix was the same in both cases: layer the new on top of the familiar. Don’t replace the mental model—extend it. This is something I think about constantly as my team designs AI features into our product. The temptation is always to lead with the impressive new capability. But if users can’t find their footing in the interface, the capability doesn’t matter. Familiarity is the on-ramp.

Neon head outline with glowing brain and ghosted silhouettes on black; overlaid text: "UX doesn't begin when users see your interface. It begins with what their minds expect to see."

The Cortex-First Approach: Why UX Starts Before the Screen

The moment your interface loads, the user experience is already halfway over, shaped by years of digital memories, unconscious biases, and mental models formed long before they arrived. Most products fail not because of bad design, but because they violate the psychological expectations users can’t even articulate. This is the Cortex-First approach: understanding that great UX begins in the mind, where emotion and familiarity decide whether users flow effortlessly or abandon in silent frustration.

uxmag.com

Fitts’s Law is one of those design principles everyone learns in school and then quietly stops thinking about. Target size, target distance, movement time. It’s a mouse-and-cursor concept, and once you’ve internalized the basics—make buttons big, put them close—it fades into the background. But with AI and voice becoming primary interaction models, the principle matters again. The friction just moved.

Julian Scaff, writing for Bootcamp, traces Fitts’s Law from desktop GUIs through touch, spatial computing, voice, and neural interfaces. His argument is that the law didn’t become obsolete—it became metaphorical:

With voice interfaces, the notion of physical distance disappears altogether, yet the underlying cognitive pattern persists. When a user says, “Turn off the lights,” there’s no target to touch or point at, but there is still a form of interaction distance, the mental and temporal gap between intention and response. Misrecognition, latency, or unclear feedback increase this gap, introducing friction analogous to a small or distant button.

“Friction analogous to a small or distant button” is a useful way to think about what’s happening with AI interfaces right now. When a user stares at a blank text field and doesn’t know what to type, that’s distance. When an agent misinterprets a prompt and the user has to rephrase three times, that’s a tiny target. The physics changed but the math didn’t.
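
For reference, the math in question is usually written in the Shannon formulation:

```latex
% Fitts's Law, Shannon formulation: movement time MT rises with the
% index of difficulty, set by distance D to the target and its width W.
% a and b are empirically fitted constants.
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

In Scaff’s metaphorical reading, D and W simply stop being physical: distance is the gap between intention and response, and width is how forgiving the system is about how you phrase that intention.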

Scaff extends this into AI and neural interfaces, where the friction gets even harder to see:

Every layer of mediation, from neural decoding errors to AI misinterpretations, adds new forms of interaction friction. The task for designers will be to minimize these invisible distances, not spatial or manual, but semantic and affective, so that the path from intention to effect feels seamless, trustworthy, and humane.

He then describes what he calls a “semantic interface,” one that interprets intent rather than waiting for explicit commands:

A semantic interface understands the why behind a user’s action, interpreting intent through context, language, and behavior rather than waiting for explicit commands. It bridges gaps in understanding by aligning system logic with human mental models, anticipating needs, and communicating in ways that feel natural and legible.

This connects to the current conversation about AI UX. The teams building chatbot-first products are, in Fitts’s terms, forcing users to cross enormous distances with tiny targets. Every blank prompt field with no guidance is a violation of the same principle that tells you to make a button bigger. We’ve known this for seventy years. We’re just ignoring it because the interface looks new.

Collage of UIs: vintage monochrome OS, classic Windows, modern Windows tiles and macOS dock, plus smartphone gesture demos

The shortest path from thought to action

Reassessing Fitts’ Law in the age of multimodal interfaces

medium.com

For years, the thing that made designers valuable was the thing that was hardest to fake: the ability to look at a spreadsheet of requirements and turn it into something visual that made sense. That skill got people hired and got them a seat at the table. And now a PM with access to Lovable or Figma Make can produce something that looks close enough to pass.

Kai Wong interviewed 22 design leaders and heard the same thing from multiple directions. One Global UX Director described the moment it clicked for his team:

“A designer on my team had a Miro session with a PM — wireframes, sketches, the usual. Then the PM went to Stitch by Google and created designs that looked pretty good. To an untrained eye, it looked finished. It obviously worried the team.”

It should worry teams. Not because the PM did anything wrong, but because designers aren’t always starting from a blank canvas anymore. They’re inheriting AI-generated drafts from people who don’t know what’s wrong with them.

Wong puts the commoditization bluntly:

Our superpower hasn’t been taken away: it’s more like anyone can buy something similar at the store.

The skill isn’t gone. It’s just no longer rare enough to carry your career on its own. What fills the gap, Wong argues, is the ability to articulate why—why this layout works, why that one doesn’t. One CEO he interviewed put it this way:

“I want the person who’s designing the thing from the start to understand the full business context.”

This resonates with me as a design leader. The designers on my teams who are hardest to replace are the ones who can walk into a room and explain why something needs to change, and tie that explanation to a user need or a business outcome. AI can’t do that yet. And the people generating those 90%-done drafts definitely can’t.

Hiker in blue shirt and cap standing on a rocky cliff edge, looking out over a sunlit forested valley and distant mountains

The 90% Problem: Why others’ AI designs may become your problem

The unfortunate reality of how many companies use AI

dataanddesign.substack.com

Every few years, the industry latches onto an interaction paradigm and tries to make it the answer to everything. A decade ago it was “make it an app.” Now it’s “just make it a chat.” The chatbot-as-default impulse is strong right now, and it’s leading teams to ship worse experiences than what they’re replacing.

Katya Korovkina, writing for UX Collective, calls this “chatbot-first thinking” and lays out a convincing case for why it’s a trap:

Many of the tasks we deal with in our personal life and at work require rich, multi-modal interaction patterns that conversational interfaces simply cannot support.

She walks through a series of validating questions product teams should ask before defaulting to a conversational UI, and the one that stuck with me is about discoverability. The food ordering example is a good one—if you don’t know what you want, listening to a menu read aloud is objectively worse than scanning one visually. But the real issue is who chat-first interfaces actually serve:

Prompt-based products work best for the users who already know how to ask the right question.

Jakob Nielsen has written about this as the “articulation barrier,” and Korovkina cites the stat that nearly half the population in wealthy countries struggles with complex texts. We’re building interfaces that require clear, precise written communication from people who don’t have that skill. And we’re acting like that’s fine because the technology is impressive.

Korovkina also makes a practical point that gets overlooked. She describes using a ChatGPT agent to get a YouTube transcript — a task that takes four clicks with a dedicated tool — and watching the agent spend minutes crawling the web, hitting paywalls, and retrying failures:

When an LLM agent spends five minutes crawling the web, calling tools, retrying failures, reasoning through intermediate steps, it is running on energy-intensive infrastructure, contributing to real data-center load, energy usage, and CO₂ emissions. For a task that could be solved with less energy by a specialised service, this is computational overkill.

The question she lands on—“was AI the right tool for this task at all?”—is the one product teams keep skipping. Sometimes a button, a dropdown, and a confirmation screen is the better answer.

Centered chat window with speech-bubble icon and text "How can I help you today?" plus a message input field; faded dashboard windows behind

Are we doing UX for AI the right way?

How chatbot-first thinking makes products harder for users

uxdesign.cc
Purple lobster with raised claws on a lit wooden platform in an underwater cave, surrounded by smaller crabs, coral and lanterns

OpenClaw and the Agentic Future

Last week an autonomous AI agent named OpenClaw (fka Moltbot, fka Clawdbot) took the tech community by storm, including a run on Mac minis as enthusiasts snapped them up to host OpenClaw 24/7. In case you’re not familiar, the app is a mostly unrestricted AI agent that lives and runs on your local machine or on a server—self-hosted, homelab, or otherwise. What can it do? You can connect it to your Google accounts, social media accounts, and others and it can act as your pretty capable AI assistant. It can even code its own capabilities. You chat with it through any number of familiar chat apps like Slack, Telegram, WhatsApp, and even iMessage.

Federico Viticci, writing in MacStories:

To say that Clawdbot has fundamentally altered my perspective of what it means to have an intelligent, personal AI assistant in 2026 would be an understatement. I’ve been playing around with Clawdbot so much, I’ve burned through 180 million tokens on the Anthropic API (yikes), and I’ve had fewer and fewer conversations with the “regular” Claude and ChatGPT apps in the process. Don’t get me wrong: Clawdbot is a nerdy project, a tinkerer’s laboratory that is not poised to overtake the popularity of consumer LLMs any time soon. Still, Clawdbot points at a fascinating future for digital assistants, and it’s exactly the kind of bleeding-edge project that MacStories readers will appreciate.

Google’s design team is working on a hard problem: how do you create a visual identity for AI? It’s not a button or a menu. It doesn’t have a fixed set of functions. It’s a conversation partner that can do… well, a lot of things. That ambiguity is difficult to represent.

Daniel John, writing for Creative Bloq, reports on Google’s recent blog post about Gemini’s visual design:

“Consider designer Susan Kare, who pioneered the original Macintosh interface. Her icons weren’t just pixels; they were bridges between human understanding and machine logic. Gemini faces a similar challenge around accessibility, visibility, and alleviating potential concerns. What is Gemini’s equivalent of Kare’s smiling computer face?”

That’s a great question. Kare’s work on the original Mac made the computer feel approachable at a moment when most people had never touched one. She gave the machine a personality through icons that communicated function and friendliness at the same time. AI needs something similar: a visual language that builds trust while honestly representing what the technology can do.

Google’s answer? Gradients. They offer “an amorphous, adaptable approach,” one that “inspires a sense of discoverability.”

They think they’ve nailed it. I don’t think they did.

To their credit, Google seems to sense the comparison is a stretch. John quotes the Google blog again:

“Gradients might be much more about energy than ‘objectness,’ like Kare’s illustrations (a trash can is a thing, a gradient is a vibe), but they infuse a spirit and directionality into Gemini.”

Kare’s icons worked because they mapped to concrete actions and mental models people already had. A trash can means delete. A folder means storage. A smiling Mac means this thing is friendly and working. Gradients don’t map to anything. They just look nice. They’re aesthetic, not communicative. John’s word to describe them, “vibe,” is right. Will a user pick up on the subtleties of a concentrated gradient versus a diffuse one?

The design challenge Google identified is real. But gradients aren’t the Kare equivalent. They’re neither ownable nor iconic (pun intended). They’re a placeholder until someone figures out what is.

Rounded four-point rainbow-gradient star on left and black pixel-art vintage Macintosh-style computer with smiling face on right.

Did Google really just compare its design to Apple?

For rival tech brands, Google and Apple have seemed awfully cosy lately. Earlier this month it was announced that, in a huge blow to OpenAI, Google’s Gemini will be powering the much awaited (and much delayed) enhanced Siri assistant on every iPhone. And now, Google has compared its UI design with that of Apple. Apple of 40 years ago, that is.

creativebloq.com

Brand guidelines have always been a compromise. You document the rules—colors, typography, spacing, logo usage—and hope people follow them. They don’t, or they follow the letter while missing the spirit. Every designer who’s inherited a brand system knows the drift: assets that are technically on-brand but feel wrong, or interpretations that stretch “flexibility” past recognition.

Luke Wroblewski is pointing at something different:

Design projects used to end when “final” assets were sent over to a client. If more assets were needed, the client would work with the same designer again or use brand guidelines to guide the work of others. But with today’s AI software development tools, there’s a third option: custom tools that create assets on demand, with brand guidelines encoded directly in.

The key word is encoded. Not documented. Not explained in a PDF that someone skims once. Built into software that enforces the rules automatically.

Wroblewski again:

So instead of handing over static assets and static guidelines, designers can deliver custom software. Tools that let clients create their own on-brand assets whenever they need them.

That is a super interesting way of looking at it.

He built a proof of concept—the LukeW Character Maker—where an LLM rewrites user requests to align with brand style before the image model generates anything. The guidelines aren’t a reference document; they’re guardrails in the code.
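
As a sketch of that pattern (not Wroblewski’s actual code; the guideline text and function names are invented for illustration), the guardrail is a rewrite step that runs before generation:

```python
# Hypothetical guideline text; llm and image_model stand in for
# whatever models the tool wraps.
BRAND_RULES = (
    "Characters are friendly, rounded, and flat-colored, on a plain "
    "background, with no photorealism and no text in the image."
)

def rewrite_request(user_request: str, llm) -> str:
    """Restate the user's request so it conforms to the brand rules."""
    return llm(
        "Rewrite this image request so it satisfies these brand "
        f"guidelines:\n{BRAND_RULES}\n\nRequest: {user_request}"
    )

def make_asset(user_request: str, llm, image_model):
    # The guardrail runs before generation, so an off-brand request
    # never reaches the image model unmodified.
    return image_model(rewrite_request(user_request, llm))
```

Because the rules live in code, the client can’t skim past them the way they would a PDF.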

This isn’t purely theoretical. When Pentagram designed Performance.gov in 2024, they delivered a library of 1,500 AI-generated icons that any federal agency could use going forward. Paula Scher defended the approach by calling it “self-sustaining”—the deliverable wasn’t a fixed set of illustrations but a system that could produce more:

The problem that’s plagued government publishing is the inability to put together a program because of the interference of different people with different ideas. This solved that.

I think this is an interesting glimpse into the future. Brand guidelines might come with software attached. I can even see a day when AI generates new design system components based on the guidelines.

Timeline showing three green construction-worker mascots growing larger from 2000 to 2006, final one with red hard hat reading a blueprint.

Design Tools Are The New Design Deliverables

Design projects used to end when “final” assets were sent over to a client. If more assets were needed, the client would work with the same designer again or us...

lukew.com

I spent all of last week linking to articles that say designers need to be more strategic. I still stand by that. But that doesn’t mean we shouldn’t understand the technical side of things.

Benhur Senabathi, writing for UX Collective, shipped 3 apps and 15+ working prototypes in 2025 using Claude Code and Cursor. His takeaway:

I didn’t learn to code this year. I learned to orchestrate. The difference matters. Coding is about syntax. Orchestration is about intent, systems, and knowing what ‘done’ looks like. Designers have been doing that for years. The tools finally caught up.

The skills that make someone good at design—defining outcomes, anticipating edge cases, communicating intent to people who don’t share your context—are exactly what AI-assisted building requires.

Senabathi again:

Prompting well isn’t about knowing to code. It’s about articulating the ‘what’ and ‘why’ clearly enough that the AI can handle the ‘how.’

This echoes how Boris Cherny uses Claude Code. Cherny runs 10-15 parallel sessions, treating AI as capacity to orchestrate rather than a tool to use. Same insight, different vantage point: Cherny from engineering, Senabathi from design.

GitHub contributions heatmap reading "701 contributions in the last year" with Jan–Sep labels and varying green activity squares

Designers as agent orchestrators: what I learnt shipping with AI in 2025

Why shipping products matters in the age of AI and what designers can learn from it

uxdesign.cc

One of my favorite parts of shipping a product is finding out how people actually use it. Not how we intended them to use it—how they bend it, repurpose it, surprise us with it. That’s when you learn what you really built.

Karo Zieminski, writing for Product with Attitude, captures a great example of this in her breakdown of Anthropic’s Cowork launch. She quotes Anthropic engineer Boris Cherny:

Since we launched Claude Code, we saw people using it for all sorts of non-coding work: conducting vacation research, creating slide presentations, organizing emails, cancelling subscriptions, retrieving wedding photos from hard drives, tracking plant growth, and controlling ovens.

Controlling ovens. I love it. Users took a coding tool and turned it into a general-purpose assistant because that’s what they needed it to be.

Simon Willison had already spotted this:

Claude Code is a general agent disguised as a developer tool. What it really needs is a UI that doesn’t involve the terminal and a name that doesn’t scare away non-developers.

That’s exactly what Anthropic shipped in Cowork. Same engine, new packaging, name that doesn’t say “developers only.”

This is the beauty of what we do. Once you create something, it’s really up to users to show you how it should be used. Your job is to pay attention—and have the humility to build what the behavior is asking for, not what your roadmap says.

Cartoon girl with ponytail wearing an oversized graduation cap with yellow tassel, carrying books and walking while pointing ahead.

Anthropic Shipped Claude Cowork in 10 Days Using Its Own AI. Here’s Why That Changes Everything.

The acceleration that should make product leaders sit up.

open.substack.com