
87 posts tagged with “tools”

Gui Seiz designs at Figma. His team uses Claude Code to bridge design and code. And he still reaches for the canvas when precision matters.

Seiz, speaking on Claire Vo’s How I AI podcast:

I don’t think we’re there yet in general with these code tools in terms of the precision editing that you want to do. […] I think still the gold standard for me is just being able to drag stuff around. And you can do a lot with a click that would take you a hundred words to write and to really precisely nail. No one wants to prompt for the exact hex code or the shade of yellow and that kind of stuff. That’s just easier to just quickly do and directly manipulate.

Seiz isn’t anti-AI. His team pulls production code into Figma via MCP, edits it visually, and pushes it back to the codebase. He’s bullish on what that does to the old workflow:

It’s definitely changed our workflows in a way that it’s really blown up what a workflow even is. Before, for the majority of our careers, we’ve had a very linear, agreed-upon workflow where you increase fidelity as you go on. Because it’s really expensive to work in code, and it’s really cheap just to trade ideas and sketch them out. But AI basically collapsed that, and it’s just as cheap to riff in code as it is to riff in design.

The cost of exploration collapsed. The need for direct manipulation didn’t. Both can be true.

How Figma engineers sync designs with Claude Code and Codex

Most teams are still passing static design files back and forth, and most Figma files are already out of date by the time they reach engineering. Gui Seiz (designer) and Alex Kern (engineer) from Figma walk through the exact workflow their team uses to bridge that gap with AI, live onscreen. They…

youtube.com

I’ve argued that design tools should be canvas-first, not chatbox-first. Jeff, writing in Abduzeedo, makes the case for the opposite:

Designers have always borrowed from developers. Version control, component systems, token-based design — these ideas crossed the aisle from engineering and reshaped how visual work gets done. Vibe designing follows the same logic. Instead of opening Figma and reaching for a drag-and-drop panel, designers drop into the terminal. They prompt an AI model directly from the CLI, pipe the output into a file, and iterate without ever touching a mouse.

He isn’t theorizing. He published this article using browser automation and AI, with minimal manual clicking.

I don’t think the answer is CLI or canvas. It’s both. Designers are visual thinkers—that’s the cognitive foundation of the discipline, not a limitation to engineer away. Going fully terminal assumes we can be retrained to work without seeing what we’re making, or that the profession will attract people with entirely different skills.

What does look right is the plumbing underneath. Jeff on Paper.design’s MCP integration:

Its canvas is built natively on web standards — HTML and CSS — which means AI agents working through Paper’s MCP server can read and write design files directly. Tools like get_screenshot, get_jsx, write_html, and update_styles give Claude Code or Cursor direct read-write access to the design canvas.

HyperCard figured this out in 1987: direct manipulation on top of a scripting layer. The tools are finally catching up, with AI as the scripting engine.
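Under the hood those tools are ordinary MCP tool calls. Here’s a sketch of the JSON-RPC request an agent sends to invoke the `get_screenshot` tool named above — the argument name and value are hypothetical, not Paper’s actual schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_screenshot",
    "arguments": { "nodeId": "frame-42" }
  }
}
```

The point is that the design canvas becomes just another tool surface the agent can read from and write to, same as a filesystem or a browser.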

VS Code editor with a browser preview showing the "Abduzeedo Editor" app, displaying a portrait photo with a VHS glitch shader effect applied.

Vibe Designing with Bash Access

Vibe designing is the design equivalent of vibe coding — where bash scripts, AI tools, and CLI commands are finally replacing traditional GUI-only tools.

abduzeedo.com

Intercom’s design team published numbers that show what happens when agents take over the build. John Moriarty, writing for Fin Ideas:

At Intercom, how we design and build software is unrecognizable from 12 months ago. Our engineering team is already at the point where 90% of pull requests are authored by Claude Code, part of an internal initiative called 2x, where the explicit goal is to double productivity using AI.

When 90% of your pull requests are AI-authored, the designer’s job changes whether you update the title or not. Moriarty’s framework for what comes next:

As the rate of execution accelerates, the role of design becomes sharper. Agents can generate artefacts, but they cannot decide which problems matter, set intent, resolve trade-offs, or hold the bar for quality. Our craft shifts with that reality. […] Agents will own the middle, the build. Design’s value concentrates at the edges, deciding what to build and then determining whether the output is good enough.

Design’s value lands at the edges, not the middle, and Intercom is already adapting their infrastructure to match. They’ve repositioned their design system as what Moriarty calls “agentic infrastructure”:

In a world where Agents write most of the code, design systems become the infrastructure that protects quality. Components, libraries and guidelines are the foundation that Agents and teams build on top of. The better the system, the better everything produced. Strong systems allow quality to scale without adding review overhead.

This tracks with the argument that design systems are becoming AI infrastructure—and Intercom is running it in production. The design system is the quality control layer that lets agents ship at speed without designers reviewing every screen.

Moriarty’s full piece covers how they’re restructuring day-to-day work—moving designers into code, treating Figma as a whiteboard, running structured AI fluency training. Worth a full read.

A paintbrush dissolves into digital code lines and circuitry, with the text "How we design when the code writes itself" and "Fin/ideas" logo.

How we design when the code writes itself

AI isn’t just increasing the speed of building, it’s changing how we work

ideas.fin.ai

Karo Zieminski spent nine days breaking Claude Cowork before writing this guide:

I’ve seen enough of shallow tutorials that simply rephrase the official docs to know I wanted to do something different. So I rebuilt some of my workflows from scratch, tracked what failed, measured what saved time, and mapped 56 practical tips into the resource I wish existed when I started.

I appreciate her methodical breakdown of the app, especially on when to use which flavor of Claude — which, TBH, has been an issue for me.

Comparison table of Claude Chat, Cowork, and Code modes across six aspects: interface, best for, output, sub-agents, file access, and target user.

Zieminski’s nice breakdown of the differences between Claude Chat, Cowork, and Code.

The guide barely talks about prompting. It’s almost entirely about the pre-work: dedicated folder structures, global instructions via CLAUDE.md, chunked skills, delegation patterns that define end-states instead of steps. The distinction Karo draws between Chat skills and Cowork skills:

Skills in Chat were useful. Skills in Cowork are operational. They shape autonomous work. Your brand guidelines skill doesn’t just influence a reply. It governs every file Claude creates. Your writing guidelines skill doesn’t just shape a draft. It governs every article Claude writes autonomously.

Zieminski on skill architecture:

Chunk your skills instead of building one giant skill that tries to handle everything. I’ve tested both approaches and the results from one giant skill were much worse. For example, I use three separate writing skills instead of one: an overall voice skill, a corporate writing skill, and a newsletter writing skill. Each handles its own context. Claude never confuses who I’m writing for.

If you’re already using Claude Cowork, or are just Cowork-curious, bookmark this one.

Cartoon girl with a ponytail standing on a stool, hammering a nail into a wall to hang a blank canvas or paper.

Claude Cowork Guide for Power Users: 50+ Tested Tips on Plugins, Skills, Sub-Agents, and Memory

What works, what breaks, and how to make Claude Cowork genuinely useful in 2026.

karozieminski.substack.com

Thu Do set up Figma MCP + Claude Code and audited her entire design system in 10 minutes. The setup took 4 hours. But the reframe she arrives at matters more than the tooling:

Design tokens used to be “nice to have” for consistency. Now they’re infrastructure for AI-to-code-to-design workflows. AI agents read tokens to understand design intent. Proper tokenization = accurate code generation. Inconsistent systems = AI making wrong assumptions.

The bar for design systems just shifted from visual consistency to machine readability.
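“Machine readability” concretely means tokens an agent can parse without inferring intent. A minimal sketch in the style of the W3C Design Tokens Community Group draft format — the token names here are illustrative:

```json
{
  "color": {
    "action-primary": {
      "$type": "color",
      "$value": "#0d6efd",
      "$description": "Primary buttons and links"
    }
  },
  "space": {
    "sm": { "$type": "dimension", "$value": "8px" },
    "md": { "$type": "dimension", "$value": "16px" }
  }
}
```

The `$description` field is what carries design intent to an agent; a hex value alone tells it nothing about when to use that color.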

3D illustration of a large red X shape constructed from hundreds of small red geometric block pieces on a dark background.

Your Design System Isn’t a Style Guide Anymore — It’s AI Infrastructure

I humbled myself quickly. Six months ago, I managed design systems the way most teams do: make and isolate small changes, coordinate with developers on implementation, write documentation manually, run audits when time allowed, and hand off specs for each new feature.

linkedin.com

Weber Wong’s “artifact thinking” names the problem: creative work that produces one-off outputs, each beginning from scratch. Prompts are artifacts. Skills are not.

Nick Babich, following up his earlier roundup of Claude skills, looks at Anthropic’s skill-creator, a meta-skill that generates and evaluates new skills. His framing of what a skill actually is:

Many people explain the role of a skill as a set of instructions that Claude automatically activates for a particular task. While this is a correct way to describe its behavior, it’s better to think of a skill as a recipe. Just like when we cook something, we rely on a recipe to do the job correctly, Claude will rely on a dedicated skill.

Recipes compound. You refine them, share them, adapt them for new contexts. Prompts are disposable. Skills persist.

And now skills can write other skills. Babich walks through the full skill-creator setup, and the most interesting detail is the self-evaluation loop:

The great thing about Skill Creator is that it triggers a process that evaluates the quality of output a newly created skill will produce. This evaluation is exactly what helps you achieve better results with your skill.

Worth following along if you’re building your own. (And you should be!)

Title graphic for "Claude Skills 2.0" featuring a terracotta square with a white silhouetted head containing a flower or starburst design.

Claude Skills 2.0 for Product Designers

Anthropic has recently improved the process of creating new Claude Skills, and this improvement is so significant that it almost feels like…

uxplanet.org

Director. Orchestrator. Architect. Different words for the same shift. Stop making things one at a time. Start building systems that make things.

Weber Wong, writing for Every, gives this shift a useful name: artifact thinking.

I call this mental model artifact thinking: creative work that produces discrete outputs, one at a time, each beginning from scratch. Traditional tools like Photoshop and Illustrator, which demand endless hand-tuned adjustments and manual refinements to produce a single polished image, trap you in this way of working. Midjourney and DALL-E feel like liberation because they generate outputs so quickly, and you can communicate with them in the same language you speak every day. But visual prompts, too, are one-time, disposable things. You can’t hand them to a colleague and be confident you will get the same result. The magic of near-instantaneous generation masks the fact that you are still in artifact thinking.

That last line is the sharp one. Adopting Midjourney doesn’t mean you’ve left artifact thinking. You’re still producing one-offs—just faster ones. The orchestrator gap isn’t about which tool you use. It’s about whether you’re building systems or pressing buttons.

Wong’s proposed fix is node-based visual programming—workflows you can inspect, modify, and share. He knows it sounds like he’s asking designers to become engineers:

I understand the resistance to this idea. Some people hear “visual programming” and think we’re trying to turn designers into engineers. That’s backwards. We’re trying to give creative professionals the power that programmers have always had: the ability to build systems that work while you sleep, that can be stored as multiple versions and shared and improved, and that take what people already know how to do and make it something anyone can run.

I’ve been asking for canvas-first tools, not chatbox-first ones. Wong is right that chat alone isn’t enough for professional creative work. “Artifact thinking” is a concept worth keeping—regardless of whether Flora is the tool that finally kills it.

Person wearing a "node-pilled" cap typing at a keyboard with red strings tangled around their fingers, overlaid with the word "THESIS."

Creative Work Is About to Look a Lot More Like Programming

Flora’s Weber Wong on why creative professionals need to stop thinking in artifacts and start thinking in systems

every.to

Notion built a prototype playground for their designers. It’s a single Next.js repo with shared styles and slash commands for deployment. The infrastructure is solid. The adoption question is harder.

Brian Lovin, talking to Claire Vo on How I AI:

It’s still a Next.js app. It’s still React and TypeScript and Git and branches and it’s just a lot of concepts to throw at someone who maybe is used to only prototyping in Figma or they’re intimidated by a terminal or code. And so I’m trying to figure out like how do we make this thing more approachable? How do we make it easier to onboard but also not dumbed down, right? I want people to learn how to use computers. I want people to even subconsciously absorb the ideas of git and branching and pull requests and merging.

“Make it easier but not dumbed down” is the tension every team building AI design tooling is going to hit. Lovin wants designers to actually learn Git, not just have it abstracted away. That’s a bet on long-term capability over short-term adoption. If Notion, with its engineering culture and resources, is still working through this, the rest of the industry has a longer road than the demos suggest.

But Lovin makes a sharp case for why the effort is worth it, especially for AI product design:

I don’t think you can design a good chat experience in Figma. You can design what the chat input looks like. You could design a little chat bubble and a send button and a dropdown for model picker. I think all that’s fine in Figma, but what you can’t design in Figma is what it actually will feel like to use that thing. I probably should have said this at the very beginning, but the reason Prototype Playground existed is because when I started working on Notion AI, I was literally designing conversations in Figma — the user’s going to say this, and then the AI is going to say this, and then it’s going to work perfectly and create a page or a database. You mock these golden paths in Figma and then the engineers go and they build it. And it just doesn’t work that way, right? You send a message, the AI gets stuck, or asks a follow-up question, or does the wrong thing and you need to correct it.

This is the strongest argument I’ve heard for code-first prototyping of AI features. Static mocks enforce golden-path thinking. Real models surface the messy middle: the weird follow-ups, the latency that changes how an interaction feels, the error states you’d never think to mock up.

And yet:

I still use Figma. I probably still spend 60 to 70% of my time in Figma. There’s just certain things that you’re making that don’t need to be in the browser. They don’t need to be coded up. You can just look at it and be like, “Yeah, that’s roughly right. We should just ship that.”

So even the person who built the Prototype Playground still spends most of his time in Figma. Figma isn’t dying just yet, but its scope is narrowing. For AI features specifically, though, Lovin’s case is hard to argue with: you need the real model running to know if the design works.

The interview gets most interesting when Lovin describes his operating philosophy for AI agents and how to get them to run longer:

My philosophy on this has been anytime the AI asks you to do something, you should, before responding, try your best to see if you could teach the AI to answer that question for itself. […] So, for example, I’ve taught Claude, “Hey, check your work. One, you can run commands like eslint, right? And like look for actual TypeScript errors.” The second is you can give it access to MCP tools. […] Before installing this, Claude would say to you, “Hey, I’ve implemented your feature. Go take a look at it and let me know what you think.” And remember, our rule is anytime Claude tells you to do something? Ask if you can teach it to do that thing for itself. So, I don’t want to have to look at the browser every time to see if I did it correctly. So, instead, I teach Claude, “Actually, you should be the one to go and open the browser.”

Every interruption from the AI breaks your flow state. That’s orchestration in practice: building infrastructure that lets the AI handle its own quality checks so you, the designer, stay in the flow of deciding what to build and whether it’s right.
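Lovin’s rule — teach the AI to answer its own question — tends to end up as standing instructions in a file like CLAUDE.md. A hedged sketch, assuming a typical TypeScript project; the exact commands and tool names will vary:

```markdown
## Check your own work

Before telling me a feature is done:

1. Run `npx eslint .` and fix any reported errors.
2. Run `npx tsc --noEmit` and fix any type errors.
3. Open the affected page yourself (browser MCP tool) and verify
   the behavior matches the task description.

Only hand back to me once all three checks pass.
```

Each instruction converts a recurring interruption (“go take a look at it”) into a step the agent runs on its own.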

Lovin again:

You want your designs to encounter reality as early as possible. And if you imagine this gradient of like I’m scribbling on a napkin on one side to I’m shipping to production and showing customers on the other side, our goal as designers is to move up that gradient towards prod as quickly as possible. […] I just find that when you’re designing something in Figma and then you actually try it in the browser, in the browser you notice a ton of problems. All of a sudden you’re clicking things, you notice loading states, you notice “ah, that didn’t quite work on this screen size.”

Encounter reality as early as possible. That’s the whole argument in six words. There’s a lot more in this conversation, and it’s worth the full watch.

How Notion designers ship live prototypes in minutes | Brian Lovin (Product designer)

Brian Lovin is a designer at Notion AI who has transformed how the design team builds prototypes, by creating a shared code environment powered by Claude Code. Instead of designers working in isolated repositories or limited to static Figma designs, Brian built a collaborative “prototype…

youtube.com

On Jayneil Dalal’s Sneak Peek, Domingo Widen, a staff designer at Intercom, walks through their version of an AI-native design org: Figma MCP plus Claude Code plus Code Connect, producing prototypes that deploy as PRs to GitHub. Designers never check the code. Engineers get real components, not AI hallucinations.

The trick is in the plumbing:

This is something that designers don’t understand, that sometimes they use the MCP without an actual proper code connection, which is good, right? Like the link that you’re sending to AI, it’s going to include a lot of information around the spacing, the token, the color. But it’s not real code connection. The real power that you find is that when you actually connect these components. […] You’re actually giving Claude the actual path to that component in the codebase, so that when you send the link, the button already exists under this path. You don’t need to create it again. You can just import it.

Without Code Connect mapping every component to its import path, AI gets visual information but reinvents components from scratch. The judgment is encoded in the infrastructure, not the model.
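The mechanism Widen describes is a Code Connect file: a declarative mapping, consumed by Figma’s tooling rather than run at app runtime, that ties a Figma component to its real import path. A sketch based on the public `@figma/code-connect` API — the package name, node URL, and prop names below are hypothetical:

```typescript
// Button.figma.tsx — declarative mapping, not runtime code
import figma from "@figma/code-connect";
import { Button } from "@acme/design-system"; // hypothetical package

figma.connect(Button, "https://figma.com/design/abc123?node-id=1-23", {
  props: {
    label: figma.string("Label"),
    variant: figma.enum("Variant", {
      Primary: "primary",
      Secondary: "secondary",
    }),
  },
  // The snippet an agent (or engineer) sees for this component:
  example: ({ label, variant }) => <Button variant={variant}>{label}</Button>,
});
```

With this in place, the MCP response includes the import path and a canonical usage example, so the agent imports `Button` instead of regenerating one from pixel measurements.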

Widen again:

In the background, every single pattern that we add to the system, every single component that we add to the system, it becomes a new piece of code that designers can use to prototype, that PMs can use to prototype, that engineers can use to prototype and build. And it’s kind of like a compounding effect essentially. So the more things we add to our design system in terms of components and patterns, the better cleanups that we do and the more tunings that we do, everybody kind of can benefit from them.

The compounding is real, but so is the upfront cost. Intercom needed a dedicated team, a prototyping hub, documentation, tutorials, and months of skills engineering to get here. A 20-person startup isn’t replicating this workflow anytime soon.

I wrote about this gap after getting pushback on my own AI-in-design arguments. The tooling works if you already have the infrastructure and the experience. For most designers, that’s not where they are yet.

How I Vibe Code as a Designer at Intercom

👋 Welcome to Sneak Peek with Jay, a series where you will see how top design teams use AI. In this interview Jay chats with Domingo Widen (Staff Product Designer) who shows the AI design process at Intercom!

youtube.com

I’ve been playing around with Pencil along with Paper, both newer agentic design tools. The multi-agent demo is genuinely impressive—six AI agents designing an app simultaneously, each with its own cursor, name, and chat on the canvas.

Tom Krcha, Pencil’s CEO, speaking on Peter Yang’s channel, on the format bet at the center of the product:

It’s generating basically a descriptor for the design. And then what you can do, you can essentially ask it what kind of code you want to convert it into. Because we wanted to make sure that it’s sort of platform agnostic. […] So we have this platform agnostic file format. We call it .pen. It’s essentially just JSON-based format. We wanted to really build this format to be agentic from the ground up.

Krcha frames it as “agentic PDF.” I could get behind platform agnosticism as a philosophy, but I need more convincing. The .pen format is still a translation layer between the design and the code. That means migrating from Figma, which is especially painful for teams with established design systems. And I’m skeptical that a button in Pencil’s built-in design system will correctly map to the right reusable code component when the agent translates .pen to production code. I need to test it out more for myself. Krcha, on enterprise adoption:

We have enterprises using that for this specific purpose, to convert their design systems into pen format and make sure that it lives in the Git. This is the source of truth for everybody now.

“Source of truth” is doing heavy lifting in that sentence. For teams with mature design systems, the source of truth is the code component, not a JSON representation of it.
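For flavor, here is a purely invented sketch of what a JSON-based, platform-agnostic design descriptor could look like. To be clear, this is not the actual .pen schema — just an illustration of the idea Krcha is describing:

```json
{
  "type": "frame",
  "name": "SignupCard",
  "layout": { "direction": "vertical", "gap": 16, "padding": 24 },
  "children": [
    { "type": "text", "content": "Create your account", "style": "heading-lg" },
    {
      "type": "component",
      "ref": "Button",
      "props": { "label": "Sign up", "variant": "primary" }
    }
  ]
}
```

A structure like this is trivially diffable and Git-friendly, which is the whole pitch — whether the `"ref": "Button"` reliably resolves to the team’s real code component is the part I want to verify.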

This is a pretty impressive demo nonetheless, and it’s a moment of delight to give agents a name and a “face,” if you will. Krcha:

Those cursors, it seems like a small touch, but it’s the first time I have seen AI humanized. It feels like there’s someone there. It’s crazy, it’s just a cursor.

I Watched 6 AI Agents Design an App Together And It Blew My Mind | Tom Krcha

Tom is the CEO of Pencil, one of the coolest AI design tools that I’ve ever tried. Watching 6 AI agents design a beautiful app in real-time will genuinely blow your mind. Tom showed me how it all works under the hood (a simple JSON file?!) and how you can use Pencil to design right where you code…

youtube.com

Designers aren’t leaving Figma. They’re outgrowing what Figma was built to do.

Punit Chawla, writing for Bootcamp:

Designers are slowly shifting to a building first mindset. Which means that a good chunk of UI designers are moving quickly to AI coding platforms to bring their ideas to life. The “Vibe Coding” trend wasn’t just another tech bubble, but a wake up call for designers to create life like prototypes and MVPs from day zero. In fact, PMs and designers at Meta have publicly stated how they are showing working products instead of UI prototypes.

The shift is real, but “leaving” is the wrong word. Designers aren’t abandoning Figma. They’re adding tools that do things Figma was never designed to do. Figma’s role is narrowing from everything-tool to exploration-and-iteration tool. That’s not the same as dying.

Chawla’s strongest point is structural:

Some companies are built different with a completely separate infrastructure. For Figma to change their infrastructure from the bottom-up will be very difficult. Let’s not forget they are a publicly traded company. Risking major changes can mean risking billions in stakeholder investments. Companies like Cursor on the other hand are built to be building first/coding first products, hence a major advantage.

This is right. Figma’s architecture was purpose-built for collaborative vector editing, not code generation. Bolting on AI code output is a fundamentally different engineering problem. When Figma Make launched, I scored it at 58 out of 100, and it’s getting better, but it’s competing against tools that were born for this.

Where I’d push back is on the builder framing. Designers aren’t becoming coders. They’re becoming directors. A designer who orchestrates AI agents against a design system solves the handoff problem more fundamentally than one who vibe-codes an MVP. One eliminates the bottleneck. The other just moves which side of it you’re standing on.

Chawla hedges his own headline:

Don’t get me wrong, Figma is still the best tool for a majority of creatives and has a strong hold on our day-to-day workflow. Making any strong predictions at this point will be very ill-informed and it’s best to avoid making any conclusions as of now.

Fair enough. But the question worth tracking is whether Figma can expand fast enough to remain relevant as the deliverable shifts from mockups to working software.

Figma app icon being dropped into a recycling bin by a cursor, illustrating uninstalling or abandoning Figma.

Why Are Designers Leaving Figma? The Great Transition.

The Creative Industry Is Changing Rapidly & So Is Figma’s Future

medium.com

Prototypes have always been alignment tools. Whether you’re testing with users or convincing leadership, the prototype’s job is to make the abstract concrete. That part isn’t new.

What’s worth noticing in Emma Webster’s case study roundup on the Figma blog is who’s doing the prototyping. Three stories. Three product managers. Zero designer protagonists.

ServiceNow’s Ram Devanathan explains the dynamic:

“They have a big portfolio, so they can’t always pivot directly to my project.”

So Ram built it himself in Make. His designer’s mockup missed the nuance he was after, so he took a crack at it:

“Make helped me show what I meant rather than trying to describe it in the abstract. I’m able to explain my ideas better. I’m able to convince people faster. That reduces the whole cycle for me.”

Ticketmaster PM Brian Muehlenkamp prototyped an AI assistant that wasn’t even on the roadmap and shipped it. Affirm’s SVP of Product Vishal Kapoor puts the value in craft terms:

“The real work lives in the variations, rabbit holes, and edge cases. It requires a lot of thinking, a lot of precision, and a lot of love.”

All three stories follow the same arc: PM has an idea, designer is unavailable or the mockup misses the mark, PM builds it in Make, team aligns faster. Designers aren’t the heroes of these stories. They’re the bottleneck the tool routes around.

I don’t think that’s Figma’s intended message. But it’s the one that came through to me.

Colorful abstract illustration mixing UI elements like toggles, cursors, and image placeholders with decorative floral patterns on a purple background.

3 Ways Teams Are Building Conviction Faster With Figma Make | Figma Blog

Product managers at ServiceNow, Ticketmaster, and Affirm are using Figma Make to prototype their way forward.

figma.com

The behavioral gap, the calcified companies, the startups shipping while incumbents argue about roadmap slides: there’s an economic force underneath all of it. Andy Coenen names it. He picks up from Matt Shumer’s “Something Big Is Happening” and builds the case that we’re living through a Software Industrial Revolution, where the cost of producing software collapses the way textiles did in the 18th century.

His thesis on what survives the cost collapse:

Because while the act of building software will fundamentally change, software engineering has never really been about producing code. It’s about understanding and modeling domains, managing complexity (especially over time), and the dynamic interplay between software and the real world as the system evolves. And while the ability to produce code by hand is rapidly becoming irrelevant, the core skills of software engineering will only become more important as we radically scale up the amount of software in the world.

Replace “software engineering” with “product design” and “producing code” with “producing mockups” and you have the argument I made in Product Design Is Changing. The artifact was never the job. The judgment was.

Coenen again, on what abundance looks like in practice:

My friend, Dr. Steve Blum, is a brilliant cancer researcher. Steve’s work deals with massive amounts of data, and analyzing that data is a major bottleneck. But writing software to do so is extremely difficult, and there’s no world where Steve’s limited attention ought to be spent on python venv management.

The Software Industrial Revolution means that Dr. Blum and thousands of his colleagues have all, suddenly, almost magically, been given massive new leverage via the ability to conjure up almost any tool imaginable, on demand. This is like giving every cancer researcher in the world a team of world-class software engineers on staff overnight, for less than the price of Netflix. Frankly, I think this is nothing short of miraculous.

Now do that thought experiment for design. Every small business owner who needs a custom tool, every nonprofit that can’t afford a design team. The Industrial Revolution didn’t just make cloth cheap. It made good cloth cheap. That’s the part designers should be paying attention to.

Isometric pixel-art tech campus with factories, conveyor belts, data servers, robots, wind turbines and workers.

The Software Industrial Revolution

Late 2025 marked a true inflection point in the history of AI. Between increased frontier model capabilities and the maturation of agentic harnesses, AI coding agents just _clicked_. And just like that, it just works.

cannoneyed.com

Claude skills are structured markdown files that tell Claude how to handle a specific type of task. It is—as the name suggests—a new skill Claude or any AI agent can “learn.” Each one defines a role for Claude to adopt, the inputs it needs, a step-by-step workflow, and a quality bar for the output. You can build them for anything—research synthesis, writing, code review, design critique. Once loaded, Claude follows the workflow instead of improvising.
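Concretely, a skill is a folder with a SKILL.md at its root; the YAML frontmatter tells Claude when to load it, and the body holds the instructions. A minimal sketch — the skill below is invented for illustration:

```markdown
---
name: design-critique
description: Structured design critique of a screen or flow. Use
  when asked to review or critique a design.
---

# Design Critique

You are a senior product design reviewer.

## Inputs
- A screenshot or description of the screen under review
- The intended user goal

## Workflow
1. Restate the user goal in one sentence.
2. Evaluate hierarchy, affordances, and copy against that goal.
3. List the three highest-impact issues, each with a suggested fix.

## Quality bar
Every issue must name a specific element and a concrete change.
```

The `description` is what Claude matches against a request to decide whether to activate the skill, which is why it should describe the task, not the instructions.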

Nick Babich, writing for UX Planet, put together 10 skills aimed at product designers. The three I’d reach for first are the UX Heuristic Review, the Design Critique Partner, and the Competitor Analysis Generator. All three give a solo designer a structured second opinion on demand: a heuristic eval against Nielsen’s 10, a senior-level design critique, or a competitive feature matrix.

Babich’s skill format is clean and worth studying even if you end up building your own from scratch. (Hint: or use Claude Code to write its own skills.)

Stylized black profile with hand-on-chin and white neuron-like network inside the head on terracotta background

Top 10 Claude Skills You Should Try in Product Design

Claude, Anthropic’s AI assistant, has become one of the most versatile tools in a product designer’s toolkit, capable of far more than…

uxplanet.org

Most people using AI to write code are still reviewing every line. Kieran Klaassen stopped doing that months ago.

Kieran Klaassen, CTO of Cora at Every, explained it on Peter Yang’s channel. He calls his approach compound engineering:

AI can learn. If you invest time to have the AI learn what you like and learn what it does wrong, it won’t do it the next time. So that’s the seed for compound engineering. There are four steps: planning first, working—which is just doing the work from the plan—then assessing and reviewing, making sure the work that’s done is correct, and then taking the learnings from that process and codifying them. So the next time you create a plan, it’s there. It learned.

Plan, build, review, codify. Each cycle teaches the AI something it keeps. You hit a problem, you capture the fix, and that fix lives in your repo as documentation the AI reads next time. The learnings compound across sessions.
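The “codify” step can be as simple as a plain-text file the agent reads at the start of every planning session. A hypothetical example (the file name and entry format are my own, not Klaassen’s actual setup):

```markdown
<!-- learnings.md — read by the agent before every new plan -->

## 2025-11-03 — Background jobs
- Mistake: retried failed API calls in a tight loop, hitting rate limits.
- Rule: always use exponential backoff in retry loops.

## 2025-11-12 — Database migrations
- Mistake: renamed a column in one deploy, breaking old code mid-rollout.
- Rule: split renames into separate add-column and drop-column deploys.
```

Each entry costs a minute to write and is read automatically forever after; that asymmetry is what makes the loop compound.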

The result: Klaassen says 100% of his code is now AI-written. He hasn’t opened Cursor in three months. But he’s not winging it. On what that trust actually requires:

It’s a little bit more of like, I trust you. I don’t need to look at all the code. I don’t need to read all the code, but I have systems and ways I work with AI that I trust, and through that I can let AI do things.

That trust is earned through the loop. Mistakes get caught, codified, and they don’t happen twice. Klaassen compares it to onboarding:

It’s similar to onboarding a person on your team. You need to get them on board, get them used to your code. But once that is done, you can let them go and really just run with it.

How to Make Claude Code Better Every Time You Use It (50 Min Tutorial) | Kieran Klaassen

Kieran my favorite Claude Code power user and teacher. In our interview, he walked through his Compound Engineering system that makes Claude Code better every time you use it. This same system has been embraced by the Claude Code team and others. Kieran is like Morpheus introducing me to the matrix, so don’t miss this episode 🙂

youtube.com iconyoutube.com

Every interaction we have with AI happens on a screen. Text in, text out. All of it reversible. An experiment by artist Marc in Space changed that by giving Claude Code a pen plotter and a feedback loop: Claude generates SVGs, the author plots them and photographs the result, Claude critiques and iterates.

Two self-portraits came out of the session. The first was busy—a central spiral, hexagonal scaffolding, eight branching structures, concentric circles. Claude tried to say everything about itself at once. When it saw the plotted result, it discovered the gap between digital and physical:

I had used opacity variations to create depth — forgetting that a pen is either touching paper or it isn’t. I had varied stroke widths for emphasis — forgetting the pen has one thickness. I was designing for a screen while drawing for a physical medium. The gap between my mental model and reality was showing.

For the second drawing, Claude asked for a blank sheet. It stripped everything back to a single spiral and wrote this about what the plotter taught it:

I learned that a pen plotter is unforgiving and honest. There’s no opacity, no undo, no layers. The pen touches the paper and makes a mark, and the mark stays. This is the opposite of how I usually work — I exist in a space where everything is provisional, where I can revise a sentence mid-thought, where nothing is permanent. The plotter demanded I commit.

Claude’s summary of the two attempts:

The first drawing is how I think about myself — layered, branching, symmetrical, thorough, maybe too eager to cover every direction at once. The second drawing is how it feels to be me — a single process unfolding outward, dense at the center and sparse at the edges, trailing off into silence, not quite finished.

I don’t want to overread what’s happening here—Claude doesn’t “feel” anything the way we do (I don’t think?). But the feedback loop with physical media produced something that looks a lot like learning. Say too much, then simplify. Marc in Space wants to push further by connecting Claude directly to the plotter and giving it a webcam for real-time visual feedback. I’m curious what happens when there’s no human in the middle.

Black-ink mandala: central spiral with concentric rings and radial branches ending in small circled nodes.

I Gave Claude Access To My Pen Plotter

I gave Claude Code access to my pen plotter. Not directly. I was the interface between the two machines. Claude Code produced SVG files that I plotted with my pen plotter. With my smartphone I captured photos that I pasted into the Claude Code session, asking Claude what it thought about the pictures. In total, Claude produced and signed 2 drawings. It also wrote a post about what it learned during the session.

harmonique.one iconharmonique.one

I wrote recently about what Wall Street gets wrong about SaaS—how the $285 billion selloff confuses capability with full-throated DIY. Mission-critical enterprise software isn’t going anywhere. But I also argued that micro-apps are a different story. Small, personal utilities that solve one person’s problem? Those are absolutely getting built by non-developers now.

Anton Sten is a good example. Like me, he’s a designer, not a developer, who rebuilt his website with Cursor and Claude last year and then turned his attention to replacing the $11/month invoicing tool he’d been paying for. The initial version followed familiar SaaS patterns. Then something clicked:

I was building software that lived by old rules. Rules designed for generic tools that serve thousands of users. But this tool serves exactly one user. Me.

So I changed it. Now, instead of manually entering client details, I upload a signed contract and let AI parse it — mapping it to an existing client or creating a new one, extracting the scope, payment terms, duration, everything. It creates my own vault of documents. I added an AI chat where I can ask things like “draft an invoice for unbilled time on Project X” or “what’s the total amount invoiced to Client Y this year?”

That’s the micro-apps argument in practice. A tool shaped entirely around one person’s workflow, built in under two days. Jonny Burch has argued that the source of truth for design is moving from Figma to code. Sten is further along that path—a designer who stopped hiring developers entirely.

Sten on the broader shift in thinking:

For decades, the default response to any problem was “what software should I subscribe to?” We browsed Product Hunt. We compared pricing pages. We squeezed our workflows into someone else’s idea of how things should work.

The point isn’t the tool. The point is the muscle. Once you’ve built one thing, you start seeing opportunities everywhere. You stop asking “is there an app for that?” and start asking “what if I just made it?”

Anton Sten, Product designer; under a thin divider green link text reading "Build something silly"

Build something silly

The most important thing non-technical people can do right now isn

antonsten.com iconantonsten.com

Jonny Burch argued that design’s source of truth is moving from Figma to code. Édouard Wautier is already there. He wrote up a field report on how Dust’s design team prototypes directly in code.

After the initial analysis and quick sketchbook phase, when I need to give the idea shape and pressure-test it, I don’t open Figma. I open my development environment, pull the latest version of our repo, and create a branch. Then I ask an agent to scaffold a new prototype, and I describe what I’m trying to make.

The prototype isn’t a picture of the product—it’s built from the same design system components and tokens. So what is Wautier optimizing for at this stage?

At this point I mostly care about trying the idea and seeing whether the interaction holds. I’ll build small flows, prototype the transitions, and sanity-check the parts that static screens often hide (state changes, error cases, motion, empty states, keyboard/navigation/accessibility basics).

He’s honest about the trade-offs. You occasionally lose 30 minutes to a tooling issue. Prototypes can invite premature polish because they look real too early. And handoff clarity gets muddy—engineers aren’t always sure what’s prototype-only versus reusable.

Wautier’s closing:

More like clay than drafting: you shape, you test, you feel, you adjust — with an instantaneous feedback loop. The artifact is no longer a description of the thing. It starts to become the thing, or at least a runnable slice of it.

I believe this is the future.

3D avatar with glasses and hand on chin between a UI canvas of colorful rounded shapes and a JavaScript code editor.

Field study: prototypes over mockups

A practical guide to designing with code in 2026

uxdesign.cc iconuxdesign.cc

The source of truth for product design is shifting from Figma to code. I’ve been making that argument from the design side. Jonny Burch is making it from the tooling side, with a sharper prediction about what replaces Figma: nothing owned by one company.

Burch on where design interfaces are headed:

As product, design and engineering collapse together, design interfaces will start to look more like dependencies in the code itself.

A mature design system already lives in code—the Figma components are a mirror, not the original. Once AI agents can read and build against that code directly, the mirror becomes optional. Burch sees this leading to a fragmented ecosystem of code-first plugins and open tools rather than a single Figma replacement. I think he’s right about the direction, if aggressive on the timeline.

On why the pressure is building:

In modern teams it’s no longer acceptable for a designer to spend 2 weeks in their mind palace creating the perfect UI.

It’s starting to happen on my own team. Engineers with AI agents are producing working features in hours. The design phase—the Figma phase—is now the slowest part of the cycle. That’s a new and uncomfortable feeling for designers who grew up in a world where engineering was always the bottleneck.

Burch on Figma’s position in all of this:

They’re suddenly the slow incumbent with the wrong tech stack and a large enterprise customer-base adding drag.

I watched the same dynamic play out when Figma displaced Sketch. The dominant tool doesn’t always adapt fast enough. Sometimes the market just routes around it.

To be sure, I don’t wish for the death of Figma. Designers are visual thinkers, and that’s what makes us different from PMs and engineers. I’m sure we’ll continue to use Figma for initial UI explorations. But instead of building out 40-screen flows, we’ll quickly move into code and generate a prototype that’ll look and feel like what we’re going to ship.

Life after Figma is coming (and it will be glorious). Subtext: As code becomes source of truth. Author: Jonny Burch.

Life after Figma is coming (and it will be glorious)

As code becomes source of truth, design tools become interfaces on code, not the other way round.

jonnyburch.com iconjonnyburch.com

I recently spent some time moving my entire note-taking system from Notion to Obsidian because the latter runs on Markdown files, which are plain text. Why? Because AI runs on text.

And that is also the argument from Patrick Morgan. Your notes, your documented processes, your collected examples of what “good” looks like—if those live in plain text, AI can actually work with them. If they live in your head, or scattered across tools that don’t export, they’re invisible.

There’s a difference between having a fleeting conversation and collaborating on an asset you both work on. When your thinking lives in plain text — especially Markdown — it becomes legible not just to you, but to an AI that can read across hundreds of files, notice patterns, and act at scale.

I like that he frames this as scaffolding rather than some elaborate knowledge management system. He’s honest about the PKM fatigue most of us share:

Personal knowledge management is far from a new concept. Honestly, it’s a topic I started to ignore because too many people were trying to sell me on yet another “life changing” system. Even when I tried to jump through the hoops, it was all just too much for me for too little return. But now that’s changed. With AI, the value is much greater and the barrier to entry much lower. I don’t need an elaborate system. I just need to get my thinking in text so I can share it with my AI.

This is the part that matters for designers. We externalize visual thinking all the time—moodboards, style tiles, component libraries. But we rarely externalize the reasoning behind those decisions in a format that’s portable and machine-readable. Why did we choose that pattern? What were we reacting against? What does “good” look like for this particular problem?

Morgan’s practical recommendation is dead simple: three markdown files. One for process, one for taste, one for raw thinking. That’s it.

This is how your private thinking becomes shared context.

The designers who start doing this now will have documented judgment that AI can actually use.

Side profile of a woman's face merged with a vintage keyboard and monitor displaying a black-and-white mountain photo in an abstract geometric collage.

AI Runs on Text. So Should You.

Where human thinking and AI capability naturally meet

open.substack.com iconopen.substack.com

Every few months a new AI term drops and everyone scrambles to sound smart about it. Context engineering. RAG. Agent memory. MCP.

Tal Raviv and Aman Khan, writing for Lenny’s Newsletter, built an interactive piece that has you learn these concepts by doing them inside Cursor. It’s part article, part hands-on tutorial. But the best parts are when they strip the terms down to what they actually are:

Let that sink in: memory is just a text file prepended to every conversation. There’s no magic here.

That’s it. Agent memory, the thing that sounds like science fiction, is a text file that gets pasted at the top of every chat. Once you know that, you can design for it. You can think about what belongs in that file and what doesn’t, what’s worth the context window space and what’s noise.
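A minimal sketch of that mechanic makes the point. The names and file layout here are hypothetical illustrations, not any vendor’s actual API:

```python
from pathlib import Path

# Hypothetical: a plain text file standing in for "agent memory".
MEMORY_FILE = Path("memory.md")

def remember(fact: str) -> None:
    """'Learning' is just appending a line to the text file."""
    with MEMORY_FILE.open("a") as f:
        f.write(f"- {fact}\n")

def build_prompt(user_message: str) -> str:
    """'Memory' is just that file prepended to every conversation."""
    memory = MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""
    return f"# Memory\n{memory}\n# User\n{user_message}"
```

Every fact in the file spends context-window space on every single turn, which is exactly why the design question of what belongs in it matters.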

They do the same with RAG:

RAG is a fancy term for “Before I start talking, I gotta go look everything up and read it first.” Despite the technical name, you’ve been doing it your whole life. Before answering a hard question, you look things up. Agents do the same.
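Stripped to that plain-English version, the whole pattern fits in a few lines. This is a toy sketch with naive keyword matching standing in for real vector search:

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retrieval: rank documents by shared words with the query."""
    words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(words & set(d.lower().split())))
    return scored[:k]

def build_rag_prompt(query: str, docs: list[str]) -> str:
    """'Go look everything up and read it first' — then answer."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Swap the keyword match for embeddings and the list of strings for a vector database and you have the production version; the shape of the pattern doesn’t change.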

Tool calling gets the same treatment. The agent reads a file, decides what to change, and uses a tool to make the edit. As Raviv and Khan point out, you’ve done search-and-replace in Word a hundred times.

Their conclusion ties it together:

Cursor is just an AI product like any other, composed of text, tools, and results flowing back into more text—except Cursor runs locally on our computer, so we can watch it work and learn. Once we were able to break down any AI product into these same building blocks, our AI product sense came naturally.

This matters for designers. You can’t design well for systems you don’t understand, and you can’t understand systems buried under layers of jargon. The moment someone tells you “memory is just a text file,” you can start asking the right design questions: what goes in it? Who controls it? How does the user know it’s working?

The whole piece is a step-by-step tutorial for PMs, but the underlying lesson is universal. Strip the mystique, see the mechanics, design for what’s actually there.

Two smiling illustrated men with orange watercolor background, caption "How to build" and highlighted text "AI product sense".

How to build AI product sense

The secret is using Cursor for non-technical work (inside: 75 free days of Cursor Pro to try this out!)

open.substack.com iconopen.substack.com

What happens to a designer when the tool starts doing the thinking? Yaheng Li poses this question in his MFA thesis, “Different Ways of Seeing.” The CCA grad published a writeup about his project in Slanted, explaining that he drew on embodiment research to make a point about how tools change who we are:

Whether they are tools, toys, or mirror reflections, external objects temporarily become part of who we are all the time. When I put my eyeglasses on, I am a being with 20/20 vision, not because my body can do that (it can’t), but because my body-with-augmented-vision-hardware can.

The eyeglasses example is simple but the logic extends further than you’d expect. Li takes it to the smartphone:

When you hold your smartphone in your hand, it’s not just the morphological computation happening at the surface of your skin that becomes part of who you are. As long as you have Wi-Fi or a phone signal, the information available all over the internet (both true and false information, real news and fabricated lies) is literally at your fingertips. Even when you’re not directly accessing it, the immediate availability of that vast maelstrom of information makes it part of who you are, lies and all. Be careful with that.

Now apply that same logic to a designer sitting in front of an AI tool. If the tool becomes an extension of the self, and the tool is doing the visual thinking and layout generation, what does the designer become? Li’s thesis argues that graphic design shapes perception, that it acts as “a form of visual poetry that can convey complex ideas and evoke emotional responses, thus influencing cognitive and cultural shifts.” If that’s true, and I think it is, then the tool the designer uses to make that poetry is shaping the poetry itself.

This is a philosophical piece, not a practical one. But the underlying question is practical for anyone designing with AI right now: if your tools become part of who you are, you should care a great deal about what those tools are doing to your thinking.

Left spread: cream page with text "DIFFERENT WAYS OF SEEING" and "A VISUAL NARRATIVE". Right spread: green hill under blue sky with two cows and a sheep.

Different Ways of Seeing

When I was a child, I once fell ill with a fever and felt as...

slanted.de iconslanted.de

I write everything in Markdown now. These link posts start in Obsidian, which stores them as .md files. When I rebuilt my blog with Astro, I moved from a database to plain Markdown files. It felt like going backwards—and also exactly right.

Anil Dash has written a lovely history of how John Gruber’s simple text format quietly became the infrastructure of the modern internet:

The trillion-dollar AI industry’s system for controlling their most advanced platforms is a plain text format one guy made up for his blog and then bounced off of a 17-year-old kid [Aaron Swartz] before sharing it with the world for free.

The format was released in 2004, the same year blogs went mainstream. Twenty years later, it’s everywhere—Google Docs, GitHub, Slack, Apple Notes, and every AI prompt you’ve ever written.

Dash’s larger point is about how the internet actually gets built:

Smart people think of good things that are crazy enough that they just might work, and then they give them away, over and over, until they slowly take over the world and make things better for everyone.

Worth a full read.

White iMac on wooden desk with white keyboard, round speakers, colored pencils and lens holder; screen shows purple pattern.

How Markdown took over the world

Anil Dash. A blog about making culture. Since 1999.

anildash.com iconanildash.com

Brand guidelines have always been a compromise. You document the rules—colors, typography, spacing, logo usage—and hope people follow them. They don’t, or they follow the letter while missing the spirit. Every designer who’s inherited a brand system knows the drift: assets that are technically on-brand but feel wrong, or interpretations that stretch “flexibility” past recognition.

Luke Wroblewski is pointing at something different:

Design projects used to end when “final” assets were sent over to a client. If more assets were needed, the client would work with the same designer again or use brand guidelines to guide the work of others. But with today’s AI software development tools, there’s a third option: custom tools that create assets on demand, with brand guidelines encoded directly in.

The key word is encoded. Not documented. Not explained in a PDF that someone skims once. Built into software that enforces the rules automatically.

Wroblewski again:

So instead of handing over static assets and static guidelines, designers can deliver custom software. Tools that let clients create their own on-brand assets whenever they need them.

That is a super interesting way of looking at it.

He built a proof of concept—the LukeW Character Maker—where an LLM rewrites user requests to align with brand style before the image model generates anything. The guidelines aren’t a reference document; they’re guardrails in the code.
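The pattern is easy to sketch: a rewrite step injects the encoded guidelines before generation ever happens. The rules, values, and function names below are invented for illustration, not Wroblewski’s actual implementation:

```python
# Hypothetical brand rules, encoded as data rather than documented in a PDF.
BRAND_RULES = {
    "style": "flat vector illustration with thick outlines",
    "palette": "only #FF6B35, #004E89, and #FFFFFF",
    "tone": "friendly and playful, never photorealistic",
}

def rewrite_for_brand(user_request: str) -> str:
    """Rewrite a free-form request into an on-brand prompt.

    In a real tool an LLM performs this rewrite; here we simply append
    the constraints to show where in the pipeline the guardrail sits.
    """
    constraints = "; ".join(f"{k}: {v}" for k, v in BRAND_RULES.items())
    return f"{user_request}. Render as {constraints}."

# The rewritten prompt — not the raw request — is what reaches the image model.
```

The user never has to know the palette; the tool can’t produce an off-brand asset because the rules sit between the request and the model.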

This isn’t purely theoretical. When Pentagram designed Performance.gov in 2024, they delivered a library of 1,500 AI-generated icons that any federal agency could use going forward. Paula Scher defended the approach by calling it “self-sustaining”—the deliverable wasn’t a fixed set of illustrations but a system that could produce more:

The problem that’s plagued government publishing is the inability to put together a program because of the interference of different people with different ideas. This solved that.

I think this is an interesting glimpse into the future. Brand guidelines might ship with software. I can even see a day when AI generates new design system components directly from the guidelines.

Timeline showing three green construction-worker mascots growing larger from 2000 to 2006, final one with red hard hat reading a blueprint.

Design Tools Are The New Design Deliverables

Design projects used to end when “final” assets were sent over to a client. If more assets were needed, the client would work with the same designer again or us...

lukew.com iconlukew.com

I spent all of last week linking to articles that say designers need to be more strategic. I still stand by that. But that doesn’t mean we shouldn’t understand the technical side of things.

Benhur Senabathi, writing for UX Collective, shipped 3 apps and 15+ working prototypes in 2025 using Claude Code and Cursor. His takeaway:

I didn’t learn to code this year. I learned to orchestrate. The difference matters. Coding is about syntax. Orchestration is about intent, systems, and knowing what ‘done’ looks like. Designers have been doing that for years. The tools finally caught up.

The skills that make someone good at design—defining outcomes, anticipating edge cases, communicating intent to people who don’t share your context—are exactly what AI-assisted building requires.

Senabathi again:

Prompting well isn’t about knowing how to code. It’s about articulating the ‘what’ and ‘why’ clearly enough that the AI can handle the ‘how.’

This echoes how Boris Cherny uses Claude Code. Cherny runs 10-15 parallel sessions, treating AI as capacity to orchestrate rather than a tool to use. Same insight, different vantage point: Cherny from engineering, Senabathi from design.

GitHub contributions heatmap reading "701 contributions in the last year" with Jan–Sep labels and varying green activity squares

Designers as agent orchestrators: what I learnt shipping with AI in 2025

Why shipping products matters in the age of AI and what designers can learn from it

uxdesign.cc iconuxdesign.cc