
32 posts tagged with “coding”

I’ve rebuilt my personal website more times than I can count. The tools and platforms change; the principle doesn’t: I own my content, and nobody gets to take it away. I have a Substack, but it’s a digest, a syndication channel. The canonical content lives on my site, on my domain. My website can’t be enshittified by anyone but me.

Henry Desroches makes the case through Ivan Illich’s Tools for Conviviality:

In his book Tools For Conviviality, technology philosopher and social critic Ivan Illich identifies these two critical moments, the optimistic arrival & the deadening industrialization, as watersheds of technological advent. Tools are first created to enhance our capacities to spend our energy more freely and in turn spend our days more freely, but as their industrialization increases, their manipulation & usurpation of society increases in tow.

Illich also describes the concept of radical monopoly: the point at which a technological tool is so dominant that people are excluded from society unless they become its users. We saw this with the automobile, we saw it with the internet, and we see it now with social media.

That’s social media in one paragraph. You don’t join Instagram because you want to; you join because opting out means opting out of the conversation. Desroches argues personal websites are the answer:

Hand-coded, syndicated, and above all personal websites are exemplary: They let users of the internet to be autonomous, experiment, have ownership, learn, share, find god, find love, find purpose. Bespoke, endlessly tweaked, eternally redesigned, built-in-public, surprising UI and delightful UX. The personal website is a staunch undying answer to everything the corporate and industrial web has taken from us.

The practical argument is strong enough on its own. Own your content. Own your platform. Syndicate outward. The moment you frame it as reclaiming the soul of the internet, you lose the people who most need to hear the boring version: just put your stuff on a domain you control.

Headline "A website to destroy all websites." above a central dark horse etching; side caption: "How to win the war for the soul of the internet.

A Website To End All Websites

How to win the war for the soul of the internet, and build the Web We Want.

henry.codes
A red-crowned crane soaring over misty mountain waterfalls in a Japanese ink-wash style illustration with pink-blossomed trees and teal rocky cliffs.

Spec-Driven Development: It Looks Like Waterfall (And I Feel Fine)

We’ve been talking a lot about agentic engineering, the way software is now getting built with AI. As I look at how design can complement this new development paradigm, a newish methodology called spec-driven development caught my eye. The idea is straightforward: you write a detailed specification first, then AI agents generate the code from it. The specification becomes the source of truth, not the code.

My first reaction when I started reading about SDD was: wait, isn’t this just waterfall?

Seriously. You gather requirements. You write them down in a structured document. You hand that document to someone (or something) that builds to spec. That’s the waterfall pattern. We spent two decades running away from it, and now it’s back wearing a blue Patagonia vest and calling itself a methodology.

The instinct when working with AI agents is to write more. More instructions, more constraints. Turns out that’s exactly wrong.

Addy Osmani, writing for O’Reilly, digs into the research:

Research has confirmed what many devs anecdotally saw: as you pile on more instructions or data into the prompt, the model’s performance in adhering to each one drops significantly. One study dubbed this the “curse of instructions”, showing that even GPT-4 and Claude struggle when asked to satisfy many requirements simultaneously. In practical terms, if you present 10 bullet points of detailed rules, the AI might obey the first few and start overlooking others.

So the answer is a smarter spec, not a longer one. Osmani pulls from GitHub’s analysis of over 2,500 agent configuration files and finds that effective specs cover six areas: commands, testing, project structure, code style, git workflow, and boundaries.

The boundaries piece is worth lingering on. Osmani recommends a three-tier system:

Always do: Actions the agent should take without asking. “Always run tests before commits.” “Always follow the naming conventions in the style guide.”

Ask first: Actions that require human approval. “Ask before modifying database schemas.” “Ask before adding new dependencies.”

Never do: Hard stops. “Never commit secrets or API keys.” “Never edit node_modules/ or vendor/.” “Never remove a failing test without explicit approval.”

That framing—always, ask first, never—gives the AI a decision framework instead of a wall of instructions. It maps to how you’d manage a person, too. Osmani quotes Simon Willison on the comparison: getting good results from a coding agent feels “uncomfortably close to managing a human intern.”
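As a sketch of how those three tiers could be written down in an agent configuration file — the headings and specific rules here are illustrative, not taken from Osmani’s piece:

```markdown
## Boundaries

### Always
- Run the full test suite before every commit.
- Follow the naming conventions in the style guide.

### Ask first
- Modifying database schemas or migration files.
- Adding a new dependency.

### Never
- Commit secrets or API keys.
- Edit node_modules/ or vendor/.
- Remove a failing test without explicit approval.
```

The point of the structure is that each rule arrives pre-sorted by how much autonomy the agent has, rather than as one long undifferentiated list.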

Klaassen’s compound engineering is one version of this. Osmani’s spec framework is another. The principle underneath both: teach fewer things well rather than everything at once.

Two humanoid robots inspect a giant iridescent aqua scroll unrolling from a metal roller in a sunlit hall.

How to Write a Good Spec for AI Agents

This post first appeared on Addy Osmani’s Elevate Substack newsletter and is being republished here with the author’s permission. TL;DR: Aim for a clear

oreilly.com

Most people using AI to write code are still reviewing every line. Kieran Klaassen stopped doing that months ago.

Kieran Klaassen, CTO of Cora at Every, appeared on Peter Yang’s channel. He calls his approach compound engineering:

AI can learn. If you invest time to have the AI learn what you like and learn what it does wrong, it won’t do it the next time. So that’s the seed for compound engineering. There are four steps: planning first, working—which is just doing the work from the plan—then assessing and reviewing, making sure the work that’s done is correct, and then taking the learnings from that process and codifying them. So the next time you create a plan, it’s there. It learned.

Plan, build, review, codify. Each cycle teaches the AI something it keeps. You hit a problem, you capture the fix, and that fix lives in your repo as documentation the AI reads next time. The learnings compound across sessions.
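A minimal sketch of what that codification step could look like: a learnings file in the repo that the AI reads before planning. The filename and entries here are hypothetical, just to make the shape concrete:

```markdown
<!-- LEARNINGS.md: read by the agent at the start of every planning phase -->
- Webhook handlers must verify signatures against the raw request body,
  not the parsed JSON. (Caught in review; silently broke verification.)
- Prefer the existing toast helper over ad-hoc alert dialogs.
- Date tests must freeze the clock; two flaky failures traced to this.
```

Each entry is a mistake that only had to happen once.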

The result: Klaassen says 100% of his code is now AI-written. He hasn’t opened Cursor in three months. But he’s not winging it. On what that trust actually requires:

It’s a little bit more of like, I trust you. I don’t need to look at all the code. I don’t need to read all the code, but I have systems and ways I work with AI that I trust, and through that I can let AI do things.

That trust is earned through the loop. Mistakes get caught, codified, and they don’t happen twice. Klaassen compares it to onboarding:

It’s similar to onboarding a person on your team. You need to get them on board, get them used to your code. But once that is done, you can let them go and really just run with it.

How to Make Claude Code Better Every Time You Use It (50 Min Tutorial) | Kieran Klaassen

Kieran is my favorite Claude Code power user and teacher. In our interview, he walked through his Compound Engineering system that makes Claude Code better every time you use it. This same system has been embraced by the Claude Code team and others. Kieran is like Morpheus introducing me to the matrix, so don’t miss this episode 🙂

youtube.com

Boris Cherny, head of Claude Code at Anthropic, on Lenny’s Podcast:

I think at this point it’s safe to say that coding is largely solved. At least for the kind of programming that I do, it’s just a solved problem because Claude can do it. And so now we’re starting to think about what’s next, what’s beyond this. Claude is starting to come up with ideas. It’s looking through feedback. It’s looking at bug reports. It’s looking at telemetry for bug fixes and things to ship—a little more like a co-worker or something like that.

“Largely solved” is a big claim from the person running the tool that’s solving it. And then he goes further—Claude is starting to decide what to build. That’s product management work.

Cherny on what his team at Anthropic already looks like:

On the Claude Code team, everyone codes. Our product manager codes, our engineering manager codes, our designer codes, our finance guy codes, our data scientist codes.

And on where the role boundaries are heading:

There’s maybe a 50% overlap in these roles where a lot of people are actually just doing the same thing and some people have specialties. I think by the end of the year the title software engineer is going to start to go away and it’s just going to be replaced by builder. Or maybe everyone’s going to be a product manager and everyone codes.

But where does design fit in all this? A PM can define the problem, maybe even come up with a good solution. But does Cherny think that AI will be the designer?

Lenny ran polls asking engineers, PMs, and designers whether they enjoy their jobs more or less since adopting AI. Engineers and PMs: 70% said more. Designers went the other direction: only 55% said more, and 18%, nearly twice as many as engineers, said they were enjoying their job less.

Cherny’s reaction:

Our designers largely code. So I think for them this is something that they have enjoyed because they can unblock themselves.

That’s an engineer’s answer to a design question. Designers at Anthropic are happy because they can ship without waiting on a developer. But “unblocking yourself” isn’t the same as “AI can do the design.” Cherny doesn’t touch the user experience, visual thinking, the spatial reasoning.

My theory: Designers are visual people. Typing to design doesn’t really compute. And who can blame us?

Head of Claude Code: What happens after coding is solved | Boris Cherny

Boris Cherny is the creator and head of Claude Code at Anthropic. What began as a simple terminal-based prototype just a year ago has transformed the role of software engineering and is increasingly transforming all professional work.

youtube.com

I’ve been arguing that the designer’s job is shifting from execution to orchestration—directing AI agents rather than pushing pixels. I made that case from the design side. Addy Osmani just made it from engineering based on what he’s seeing.

Osmani draws a hard line between vibe coding and what he calls “agentic engineering.” On vibe coding:

Vibe coding means going with the vibes and not reviewing the code. That’s the defining characteristic. You prompt, you accept, you run it, you see if it works. If it doesn’t, you paste the error back and try again. You keep prompting. The human is a prompt DJ, not an engineer.

“Prompt DJ” is good. But Osmani’s description of the disciplined version is what caught my attention—it’s the same role I’ve been arguing designers need to grow into:

You’re orchestrating AI agents - coding assistants that can execute, test, and refine code - while you act as architect, reviewer, and decision-maker.

Osmani again:

AI didn’t cause the problem; skipping the design thinking did.

An engineer wrote that. The spec-first workflow Osmani describes is design process applied to code. Designers have been saying “define the problem before you jump to solutions” for decades. AI just made that advice load-bearing for engineers too.

The full piece goes deep on skill gaps, testing discipline, and evaluation frameworks—worth a complete read.

White serif text reading "Agentic Engineering" centered on a black background.

Agentic Engineering

Agentic Engineering is a disciplined approach to AI-assisted software development that emphasizes human oversight and engineering rigor, distinguishing it fr...

addyosmani.com

Tommaso Nervegna, a Design Director at Accenture Song, gives one of the clearest practitioner accounts I’ve seen of what using Claude Code as a designer looks like day to day.

The guide is detailed—installation steps, terminal commands, deployment. This is essential reading for any designer interested in Claude Code. But for me, the interesting part isn’t the how-to. It’s his argument that raw AI coding tools aren’t enough without structure on top:

Claude Code is powerful, but without proper context engineering, it degrades as the conversation gets longer.

Anyone who’s used these tools seriously has experienced this. You start a session and the output is sharp. Forty minutes in, it’s forgotten your constraints and is hallucinating component names. Nervegna uses a meta-prompting framework called Get Shit Done that breaks work into phases with fresh contexts—research, planning, execution, verification—each getting its own 200K token window. No accumulated garbage.
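A minimal sketch of the fresh-context idea, assuming a hypothetical `run_phase` call that stands in for starting a new agent session. Each phase is seeded only with the previous phase’s distilled output, never the accumulated chat history:

```python
# Each phase gets a fresh context: only the previous phase's output
# carries over, not the whole conversation so far.
def run_phase(name, brief):
    # A real implementation would open a new agent session here,
    # seeded only with `brief`.
    return f"[{name}] output for: {brief}"

def pipeline(task):
    handoff = task
    for phase in ["research", "planning", "execution", "verification"]:
        handoff = run_phase(phase, handoff)  # fresh session per phase
    return handoff

print(pipeline("add CSV export"))
```

The design choice is the handoff: garbage can’t accumulate because nothing except the deliberate summary survives a phase boundary.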

The framework ends up looking a lot like good design process applied to AI:

Instead of immediately generating code, it asks:

“What happens when there’s no data to display?” “Should this work on mobile?” “What’s the error state look like?” “How do users undo this action?”

Those are the questions a senior designer asks in a review. Nervegna calls it “spec-driven development,” but it’s really the discipline of defining the problem before jumping to solutions—something our profession has always preached and often ignored when deadlines hit.

Nervegna again:

This is spec-driven development, but the spec is generated through conversation, not written in Jira by a project manager.

The specification work that used to live in PRDs and handoff docs is happening conversationally now, between a designer and an AI agent. The designer’s value is in the questions asked before any code gets written.

Terminal-style window reading "CLAUDE CODE FOR DESIGNERS — A PRACTICAL GUIDE" over coral background with black design-tool icons.

Claude Code for Designers: A Practical Guide

A Step-by-Step Guide to Designing and Shipping with Claude Code

nervegna.substack.com

In a Jason Lemkin piece on SaaStr, Intercom CPO Paul Adams describes what happened to his design team over the last 18 months:

Every single designer at Intercom now ships code to production. Zero did 18 months ago. The mandate was clear: this is now part of your job. If you don’t like it, find somewhere that doesn’t require it, and they’ll hire designers who love the idea.

Not a pilot program, nor an optional workshop. It was a mandate. Adams basically said, “This is your job now, or it isn’t your job here anymore.” (I do note the language here is indifferent to the real human cost.)

But the designers-shipping-code mandate is one piece of a larger consolidation. Adams applies a simple test across the entire org: what would a brand new startup incorporated today do here?

Would they have separate product marketers and content marketers? Or is that the same job now? Would they have both product managers and product designers as distinct roles? The answer usually points to consolidation, not specialization.

There it is again, the compression of roles.

But Adams isn’t just asking the question. He took over two-thirds of Intercom’s marketing six months ago and rebuilt it from scratch—teams, roadmaps, calendars, gone.

All of the above is a glimpse of what Matt Shumer was talking about in “Something Big Is Happening.”

The way the product gets built has changed too. Adams describes Intercom’s old process versus the new one:

The old way: Pick a job to be done → Listen to customers → Design a solution → Build and ship. Execution was certain. Technology was stable. Design was the hard part. The new way: Ask what AI makes possible → Prototype to see if you can build it reliably → Build the UX later → Ship → Learn at scale.

“Build the UX later” is a scary thought, isn’t it? In many ways, we must unlearn what we have learned, to quote Yoda. Honestly though, that’s easier said than done and is highly dependent on how forgiving your userbase is.

Why Most B2B Companies Are Failing at AI (And How to Avoid It) with Intercom’s CPO

How Intercom Bet Everything on AI—And Built Fin to 1M+ Resolutions Per Week Paul Adams is Chief Product Officer at Intercom, leading Product Management, Product Design, Data Science, and Research. …

saastr.com

Jeff Bezos introduced the two-pizza rule in 2002: if a team needs more than two pizzas to eat, it’s too big. It became gospel for how to organize product teams. Dan Shipper thinks the number just got a lot smaller:

We have four software products, each run by a single person. Ninety-nine percent of our code is written by AI agents. Overall, we have six business units with just 20 full-time employees.

Two pizzas down to two slices. Two slices per person. One person per product. And these aren’t demos or side projects. Shipper’s numbers on one of them:

Monologue, our smart dictation app run by Naveen Naidu, is used about 30,000 times a day to transcribe 1.5 million words. The codebase totals 143,000 lines of code and Naveen’s written almost every single line of it himself with the help of Codex and Opus.

A year ago that would have been a team of four or five engineers plus a PM plus a designer. Shipper himself built a separate product—a Markdown editor—and describes the compression:

An editor like this would have previously taken 3-4 engineers six months to build. Instead, I made it in my spare time.

“In my spare time” is doing a lot of work in that sentence. This is what the small teams, big leverage argument looks like when you stop theorizing and start counting.

Two classical statue profiles exchange pepperoni pizza slices over a blue sky, with a small temple in the background.

The Two-slice Team

Amazon’s “two-pizza rule” worked for the past twenty-four years. We need a new heuristic for the next twenty-four.

every.to

What’s Next in Vertical SaaS

After posting my essay about Wall Street and the tumbling B2B software stocks, I came across a few items that pull on the thread even more, toward something forward-looking.

Firstly, my old colleague Shawn Smith had a more nuanced reaction to the story. Smith has been both a Salesforce customer many times over and a product manager there.

On the customer side, without exception, the sentiment was that Salesforce is an expensive partial solution. There were always gaps in what it could do, which were filled by janky workarounds. In every case, the organization at least considered building an in-house solution which would cover all the bases *and* cost less than the Salesforce contract. I think the threat of AI to Salesforce is very real in this sense. Companies will use it to build their own solutions, but this outcome is probably at least 2-5 years out in many cases because switching costs are real, and contracts are an obstacle.

He is less convinced about the threat to something like Adobe, where individual preferences around tooling are more of a determining factor. The underlying threat in Smith’s analysis—that companies will build their own solutions—points to a deeper question about which software businesses have real moats, especially against newer, AI-native upstarts.

Anthropic published a study that puts numbers to something I’ve been writing about in the design context for a while now. They ran a randomized controlled trial with 52 junior software engineers learning a new Python library. Half used AI assistance. Half coded by hand.

Judy Hanwen Shen and Alex Tamkin, writing for Anthropic Research:

Participants in the AI group scored 17% lower than those who coded by hand, or the equivalent of nearly two letter grades. Using AI sped up the task slightly, but this didn’t reach the threshold of statistical significance.

So the AI group didn’t finish meaningfully faster, but they understood meaningfully less. And the biggest gap was in debugging—the ability to recognize when code is wrong and figure out why. That’s the exact skill you need most when your job is to oversee AI-generated output.

The largest gap in scores between the two groups was on debugging questions, suggesting that the ability to understand when code is incorrect and why it fails may be a particular area of concern if AI impedes coding development.

This is the same dynamic I fear in design. When I wrote about the design talent crisis, educators like Eric Heiman told me “we internalize so much by doing things slower… learning through tinkering with our process, and making mistakes.” Bradford Prairie put it more bluntly: “If there’s one thing that AI can’t replace, it’s your sense of discernment for what is good and what is not good.” But discernment comes from reps, and AI is eating the reps.

The honest framing from Anthropic’s own researchers:

It is possible that AI both accelerates productivity on well-developed skills and hinders the acquisition of new ones.

Credit to Anthropic for publishing research that complicates the case for their own product. And the study’s footnote is worth noting: they used a chat-based AI assistant, not an agentic tool like Claude Code. Their expectation is that “the impacts of such programs on skill development are likely to be more pronounced.”

I can certainly attest that when I use Claude Code, I have no idea what’s going on!

The one bright spot: not all AI use was equal. Participants who asked conceptual questions and used AI to check their understanding scored well. The ones who delegated code generation wholesale scored worst. The difference was whether you were thinking alongside the tool or letting it think for you.

Cognitive effort—and even getting painfully stuck—is likely important for fostering mastery.

Getting painfully stuck. That’s the apprenticeship. That’s the grunt work. And it’s exactly what we’re optimizing away.

Stylized hand pointing to a white sheet with three horizontal rows of black connected dots on a beige background.

How AI assistance impacts the formation of coding skills

Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems.

anthropic.com

I recall being in my childhood home in San Francisco, staring at the nine-inch monochrome screen on my Mac, clicking square zoning tiles, building roads, and averting disasters late into the night. Yes, that was SimCity in 1989. I’d go on to play pretty much every version thereafter, though the mobile one isn’t quite the same.

Anyhow, Andy Coenen, a software engineer at Google Brain, decided to build a SimCity version of New York as a way to learn some of the newer gen AI models and tools:

Growing up, I played a lot of video games, and my favorites were world building games like SimCity 2000 and Rollercoaster Tycoon. As a core millennial rapidly approaching middle age, I’m a sucker for the nostalgic vibes of those late 90s / early 2000s games. As I stared out at the city, I couldn’t help but imagine what it would look like in the style of those childhood memories.

So here’s the idea: I’m going to make a giant isometric pixel-art map of New York City. And I’m going to use it as an excuse to push hard on the limits of the latest and greatest generative models and coding agents.

Best case scenario, I’ll make something cool, and worst case scenario, I’ll learn a lot.

The writeup goes deep into the technical process—real NYC city data, fine-tuned image models, custom generation pipelines, and a lot of manual QA when the models couldn’t get water and trees right. Worth reading in full if you’re curious. But his conclusion on what AI means for creative work is where I want to focus.

Coenen on drudgery:

…So much of creative work is defined by this kind of tedious grind.

For example, [as a musician] after recording a multi-part vocal harmony you change something in the mix and now it feels like one of the phrases is off by 15 milliseconds. To fix it, you need to adjust every layer - and this gets more convoluted if you’re using plugins or other processing on the material.

This isn’t creative. It’s just a slog. Every creative field - animation, video, software - is full of these tedious tasks. Of course, there’s a case to be made that the very act of doing this manual work is what refines your instincts - but I think it’s more of a “Just So” story than anything else. In the end, the quality of art is defined by the quality of your decisions - how much work you put into something is just a proxy for how much you care and how much you have to say.

I’d push back slightly on the “Just So story” part—repetition does build instincts that are hard to shortcut. But the broader point holds. And his closer echoes my own sentiment after finishing a massive gen AI project:

If you can push a button and get content, then that content is a commodity. Its value is next to zero.

Counterintuitively, that’s my biggest reason to be optimistic about AI and creativity. When hard parts become easy, the differentiator becomes love.

Check out Coenen’s project here. I think the only thing missing is animated cars on the road.

Bonus: If you’re like me or Andy Coenen and loved SimCity, there’s a free, open-source online game called IsoCity that you can play. It runs natively in the browser.

Isometric pixel-art NYC skyline showing dense skyscrapers, streets, a small park, riverside and a UI title bar with mini-map.

isometric-nyc

cannoneyed.com

I write everything in Markdown now. These link posts start in Obsidian, which stores them as .md files. When I rebuilt my blog with Astro, I moved from a database to plain Markdown files. It felt like going backwards—and also exactly right.

Anil Dash has written a lovely history of how John Gruber’s simple text format quietly became the infrastructure of the modern internet:

The trillion-dollar AI industry’s system for controlling their most advanced platforms is a plain text format one guy made up for his blog and then bounced off of a 17-year-old kid [Aaron Swartz] before sharing it with the world for free.

The format was released in 2004, the same year blogs went mainstream. Twenty years later, it’s everywhere—Google Docs, GitHub, Slack, Apple Notes, and every AI prompt you’ve ever written.

Dash’s larger point is about how the internet actually gets built:

Smart people think of good things that are crazy enough that they just might work, and then they give them away, over and over, until they slowly take over the world and make things better for everyone.

Worth a full read.

White iMac on wooden desk with white keyboard, round speakers, colored pencils and lens holder; screen shows purple pattern.

How Markdown took over the world

Anil Dash. A blog about making culture. Since 1999.

anildash.com

When I managed over 40 creatives at a digital agency, the hardest part wasn’t the work itself—it was resource allocation. Who’s got bandwidth? Who’s blocked waiting on feedback? Who’s deep in something and shouldn’t be interrupted? You learn to think of your team not as individuals you assign tasks to, but as capacity you orchestrate.

I was reminded of that when I read about Boris Cherny’s approach to Claude Code. Cherny is a Staff Engineer at Anthropic who helped build Claude Code. Karo Zieminski, writing in her Product with Attitude Substack, breaks down how Cherny actually uses his own tool:

He keeps ~10–15 concurrent Claude Code sessions alive: 5 in terminal (tabbed, numbered, with OS notifications). 5–10 in the browser. Plus mobile sessions he starts in the morning and checks in on later. He hands off sessions between environments and sometimes teleports them back and forth.

Zieminski’s analysis is sharp:

Boris doesn’t see AI as a tool you use, but as a capacity you schedule. He’s distributing cognition like compute: allocate it, queue it, keep it hot, switch contexts only when value is ready. The bottleneck isn’t generation; it’s attention allocation.

Most people treat AI assistants like a single very smart coworker. You give it a task, wait for the answer, evaluate, iterate. Cherny treats Claude like a team—multiple parallel workers, each holding different context, each making progress while he’s focused elsewhere.

Zieminski again:

Each session is a separate worker with its own context, not a single assistant that must hold everything. The “fleet” approach is basically: don’t make one brain do all jobs; run many partial brains.

I’ve been using Claude Code for months, but mostly one session at a time. Reading this, I realize I’ve been thinking too small. The parallel-session model is about throughput: start a research task in one session, let it run while you code in another, check back when it’s ready.

Looks like the new skill on the block is orchestration.

Cartoon avatar in an orange cap beside text "I'm Boris and I created Claude Code." with "6.4M Views" in a sketched box.

How Boris Cherny Uses Claude Code

An in-depth analysis of how Boris Cherny, creator of Claude Code, uses it — and what it reveals about AI agents, responsibility, and product thinking.

open.substack.com

Product manager Adrian Raudaschl offered some reflections on 2025 from his point of view. It’s a mixture of life advice, product recommendations, and thoughts about the future of tech work.

The first quote I’ll pull out is this one, about creativity and AI:

Ultimately, if we fail to maintain active engagement with the creative process and merely delegate tasks to AI without reflection, there is a risk that delegation becomes abdication of responsibility and authorship.

“Active engagement” with the tasks that we delegate to AI. This reminds me of the humble machines argument by Dr. Maya Ackerman.

On vibe coding:

The most important thing, I think, that most people in knowledge work should be doing is learning to vibe code. Vibe code anything: a diary, a picture book for your mum, a fan page for your local farm. Anything. It’s not about learning to code, but rather appreciating how much more we could do with machines than before. This is what I mean about the generalist product manager: being able to prototype, test, and build without being held back by technical constraints.

I concur 100%. Even if you don’t think you’re a developer, even if you don’t quite understand code, vibe coding something will be illuminating. I think it’s different from asking ChatGPT for a bolognese sauce recipe or how to change a tire. Building something that will instantly run on your computer, and seeing the adjustments made in real time from your plain-English prompts, is very cool and gives you a glimpse into how LLMs problem-solve.

A product manager’s 48 reflections on 2025

and why I’ve been making Bob Dylan songs about Sonic the Hedgehog

uxdesign.cc

Someone on X recently claimed they “popularized” dithering in modern design—a bold claim for a technique that’s been around since Robert Floyd and Louis Steinberg formalized it in 1976 and Bill Atkinson refined it for the original Macintosh. The design community swiftly reminded them that revival isn’t invention, and that dithering’s current moment owes more to retro-tech aesthetics meeting modern GPU pipelines than to any single designer’s genius.

Speaking of which, here’s a visual explainer by Damar Aji Pramudita that’s worth your time on how dithering actually works. Apparently it’s only part one of three.
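If you want the one-paragraph version before diving into the visuals: quantize each pixel to pure black or white, then push the rounding error onto the neighbors you haven’t visited yet. Here’s a minimal sketch of the classic Floyd–Steinberg pass over a grayscale image (plain lists of 0–255 values), just to show the error-diffusion weights at work:

```python
def floyd_steinberg(pixels):
    """1-bit Floyd-Steinberg dither of a grayscale image given as rows of 0-255 values."""
    h, w = len(pixels), len(pixels[0])
    img = [[float(v) for v in row] for row in pixels]  # working copy that accumulates error
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 255 if old >= 128 else 0  # quantize to pure black or white
            out[y][x] = new
            err = old - new                 # the rounding error we still "owe" the image
            # Diffuse the error to not-yet-visited neighbors (weights 7/16, 3/16, 5/16, 1/16)
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return out
```

Feed it a flat 50% gray and you get a roughly even scatter of black and white pixels that reads as gray from a distance, which is exactly the illusion the explainer visualizes.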

Halftone black-and-white portrait split across a folded, book-like panel on a yellow background; "visualrambling.space" at top-left.

Dithering - Part 1

Understanding how dithering works, visually.

visualrambling.space

While this piece by Matias Heikkilä is from a developer’s point of view, it’s applicable to designers. His thesis: LLMs are good at coding, but they can’t see the bigger picture and build software. To be sure, Cursor and Claude Code reason and produce plans. I’ve given both fairly small products to build. Their plans look good, but when they try to implement, they’ll invariably hit a snag. They’ll confidently say “It’s done!” with a green checkmark emoji, and then when I go to run it, the program fails.

Heikkilä, writing in his company’s blog:

There is old wisdom that says: Coding is easy, software engineering is hard. It seems fair enough to say that LLMs are already able to automate a lot of coding. GPT-5 and the like solve isolated well-defined problems with a pretty nice success rate. Coding, however, is not what most people are getting paid for. Building a production-ready app is not coding, it’s software engineering.

ByteSauna wordmark: white angled brackets surround three red steam lines, with "ByteSauna" text to the right.

AI can code, but it can’t build software

AI can write code, but it can’t build real software. Software engineering remains human work because AI can code, not engineer.

bytesauna.com

I think the headline is a hard stance, but I appreciate the sentiment. All the best designers and creatives—including developers—I’ve ever worked with do things on the side. Or in Rohit Prakash’s words, they tinker. They’re always making something, learning along the way.

Prakash, writing in his blog:

Acquiring good taste comes through using various things, discarding the ones you don’t like and keeping the ones you do. if you never try various things, you will not acquire good taste.

It’s important for designers to see other designs and use other products—if you’re a software designer. It’s equally important to look up from Dribbble, Behance, Instagram, and even this blog and go experience something unrelated to design. Art, concerts, cooking. All of it gets synthesized through your POV and becomes your taste.

Large white text "@seatedro on x dot com" centered on a black background.

If you don’t tinker, you don’t have taste

programmer by day, programmer by night.

seated.ro

It’s interesting to me that Figma had to have a separate conference and set of announcements focused on design systems. In some sense it’s an indicator of how big and mature this part of design has become.

A few highlights from my point-of-view…

Slots seems to solve one of those small UX paper cuts—those niggly inconveniences that we just lived with. But this is a big deal. You’ll be able to add layers within component instances without breaking the connection to your design system. No more pre-building hidden list items or forcing designers to detach components. Pretty advanced stuff.

On the code front, they’re making Code Connect actually approachable with a new UI that connects directly to GitHub and uses AI to map components. The Figma MCP server is out of beta and now supports design system guidelines—meaning your agentic coding tools can actually respect your design standards. Can’t wait to try these.

For teams like mine that are using Make, you’ll be able to pull in design systems through two routes: Make kits (generate React and CSS from Figma libraries) or npm package imports (bring in your existing code components). This is the part where AI-assisted design doesn’t have to mean throwing pixelcraft out the window.

Design systems have always been about maintaining quality at scale. These updates are very welcome.

Bright cobalt background with "schema" in a maroon bar and light-blue "by Figma" text, stepped columns of orange semicircles on pale-cyan blocks along right and bottom.

Schema 2025: Design Systems For A New Era

As AI accelerates product development, design systems keep the bar for craft and quality high. Here’s everything we announced at Schema to help teams design for the AI era.

figma.com

As a follow-up to our previous item on Claude Code, here’s an article by Nick Babich who gives us three ways product designers can use Claude to code.

Remember that Anthropic’s Claude has been the leading LLM for coding for a while now.

Claude For Code: How to use Claude to Streamline Product Design Process

Anthropic Claude is a primary competitor of OpenAI’s ChatGPT. Just like ChatGPT, it’s a versatile tool that can be used in many…

uxplanet.org

With Cursor and Lovable as the darlings of AI coding tools, don’t sleep on Claude Code. Personally, I’ve been splitting my time between Claude Code and Cursor. While Claude Code’s primary persona is coders and tinkerers, it can be used for so much more.

Lenny Rachitsky calls it “the most underrated AI tool for non-technical people.”

The key is to forget that it’s called Claude Code and instead think of it as Claude Local or Claude Agent. It’s essentially a super-intelligent AI running locally, able to do stuff directly on your computer—from organizing your files and folders to enhancing image quality, brainstorming domain names, summarizing customer calls, creating Linear tickets, and, as you’ll see below, so much more.

Since it’s running locally, it can handle huge files, run much longer than the cloud-based Claude/ChatGPT/Gemini chatbots, and it’s fast and versatile. Claude Code is basically Claude with even more powers.

Rachitsky shares 50 of his “favorite and most creative ways non-technical people are using Claude Code in their work and life.”

Everyone should be using Claude Code more

How to get started, and 50 ways non-technical people are using Claude Code in their work and life

lennysnewsletter.com

Auto-Tagging the Post Archive

Since I finished migrating my site from Next.js/Payload CMS to Astro, I’ve been wanting to redo the tag taxonomy for my posts. They’d gotten out of hand over time, and the tag tumbleweed grew to more than 80 tags. What the hell was I thinking when I had both “product design” and “product designer”?

Anyway, I tried a few programmatic ways to determine the best taxonomy, but ultimately culled it down manually to 29 tags. Then I really didn’t want to go back and re-tag more than 350 posts by hand, so I turned to AI. It took two attempts. The first approach, which Cursor planned for me, used ML to discern the tags, but it failed spectacularly because it relied on word frequency, not semantic meaning.

So I tried an LLM approach, and that worked. I spec’d it out and had Claude Code write it for me. After another hour or so of experimenting to see if the resulting tags held up, I let it run concurrently in four terminal windows to process all the posts from the past 20 years. Et voilà!
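For the curious, the core of the LLM approach fits in a few lines. What follows is a hypothetical sketch, not the actual script Claude Code wrote for me: the tag list is a made-up subset of my 29, and the helper names are invented. The two parts that matter are constraining the model to the fixed taxonomy and validating its reply so it can’t sneak new tags back in:

```python
import json
import re

ALLOWED_TAGS = ["AI", "coding", "design systems", "product design"]  # hypothetical subset of the 29

def build_prompt(post_text, allowed=ALLOWED_TAGS):
    """Ask for tags strictly from the curated taxonomy, returned as a JSON array."""
    return (
        "Choose up to 3 tags for this post, strictly from this list: "
        + ", ".join(allowed)
        + ". Reply with a JSON array only.\n\n"
        + post_text[:4000]  # truncate long posts to keep the prompt small
    )

def parse_tags(reply, allowed=ALLOWED_TAGS):
    """Pull the JSON array out of the model's reply and drop any tag outside the taxonomy."""
    match = re.search(r"\[.*\]", reply, re.S)
    if not match:
        return []
    try:
        tags = json.loads(match.group(0))
    except json.JSONDecodeError:
        return []
    return [t for t in tags if t in allowed]
```

The real script just loops over the Markdown files, sends each `build_prompt` output to Claude, and writes the surviving tags back into the post’s frontmatter; running four terminal windows in parallel amounts to splitting the file list four ways.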

I spot-checked at least half of all the posts manually and made some adjustments. But I’m pretty happy with the results.

See the new tags on the Search page or just click around and explore.

A computer circuit board traveling at warp speed through space with motion-blurred light streaks radiating outward, symbolizing high-performance computing and speed.

The Need for Speed: Why I Rebuilt My Blog with Astro

Two weekends ago, I quietly relaunched my blog. It was a heart transplant really, of the same design I’d launched in late March.

The First Iteration

Back in early November of last year, I re-platformed from WordPress to a home-grown, Cursor-made static site generator. I’d write in Markdown and push code to my GitHub repository and the post was published via Vercel’s continuous deployment feature. The design was simple and it was a great learning project for me.

Is the AI bubble about to burst? Apparently, AI prompt-to-code tools like Lovable and v0 have peaked and are on their way down.

Alistair Barr writing for Business Insider:

The drop-off raises tough questions for startups that flaunted exponential annual recurring revenue growth just months ago. Analysts wrote that much of that revenue comes from month-to-month subscribers who may churn as quickly as they signed up, putting the durability of those flashy numbers in doubt.

Barr interviewed Eric Simons, CEO of Bolt, who said:

“This is the problem across all these companies right now. The churn rate for everyone is really high,” Simons said. “You have to build a retentive business.”

AI vibe coding tools were supposed to change everything. Now traffic is crashing.

Vibe coding tools have seen traffic drop, with Vercel’s v0 and Lovable seeing significant declines, raising sustainability questions, Barclays warns.

businessinsider.com

In many ways, this excellent article by Kaustubh Saini for Final Round AI’s blog is a cousin to my essay on the design talent crisis. But it’s about what happens when people “become” developers and only know vibe coding.

The appeal is obvious, especially for newcomers facing a brutal job market. Why spend years learning complex programming languages when you can just describe what you want in plain English? The promise sounds amazing: no technical knowledge required, just explain your vision and watch the AI build it.

In other words, these folks don’t understand the code and, well, bad things can happen.

The most documented failure involves an indie developer who built a SaaS product entirely through vibe coding. Initially celebrating on social media that his “saas was built with Cursor, zero hand written code,” the story quickly turned dark.

Within weeks, disaster struck. The developer reported that “random things are happening, maxed out usage on api keys, people bypassing the subscription, creating random shit on db.” Being non-technical, he couldn’t debug the security breaches or understand what was going wrong. The application was eventually shut down permanently after he admitted “Cursor keeps breaking other parts of the code.”

This failure illustrates the core problem with vibe coding: it produces developers who can generate code but can’t understand, debug, or maintain it. When AI-generated code breaks, these developers are helpless.

I don’t foresee something this disastrous with design. I mean, a newbie designer wielding an AI-enabled Canva or Figma can’t tank a business alone because the client will have eyes on it and won’t let through something that doesn’t work. It could be a design atrocity, but it’ll likely be fine.

This *can* happen to a designer using vibe coding tools, however. Full disclosure: I’m one of them. This site is partially vibe-coded. My Severance fan project is entirely vibe-coded.

But back to the idea of a talent crisis. In the developer world, it’s already happening:

The fundamental problem is that vibe coding creates what experts call “pseudo-developers.” These are people who can generate code but can’t understand, debug, or maintain it. When AI-generated code breaks, these developers are helpless.

In other words, they don’t have the skills necessary to be developers because they can’t do the basics. They can’t debug, don’t understand architecture, have no code review skills, and basically have no fundamental knowledge of what it means to be a programmer. “They miss the foundation that allows developers to adapt to new technologies, understand trade-offs, and make architectural decisions.”

Again, even if our junior designers have the requisite fundamental design skills, not having spent time developing their craft and strategic judgment through experience will be detrimental to them and to any org that hires them.


How AI Vibe Coding Is Destroying Junior Developers’ Careers

New research shows developers think AI makes them 20% faster but are actually 19% slower. Vibe coding is creating unemployable pseudo-developers who can’t debug or maintain code.

finalroundai.com