Figma is opening its canvas as a writeable surface for AI agents. Matt Colyer, product director at Figma, on why this matters:
Design decisions—from color palettes and button padding, to typography and interactivity—have always defined how products take shape. No matter how small, those decisions add up. They make your product and user experience stand out from the rest. To date, AI agents haven’t had this context, which is why so many designs created by AI often feel unfamiliar and generic.
The fix is beefed-up skills files that encode a team's design decisions, conventions, and sequencing rules. Agents read them before they touch the canvas. The use_figma tool lets Claude Code, Codex, and other MCP clients create and update assets tied to your design system. Colyer on what that changes:
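Figma's post doesn't pin down an exact file format, but Claude Code already reads skills as SKILL.md files with YAML frontmatter. A plausible fragment, with all names and values invented for illustration, might look like:

```
---
name: acme-design-system
description: Design conventions agents must follow on the Acme Figma canvas
---

## Conventions
- Use components from the "Acme UI" library; never draw raw rectangles for buttons.
- Spacing comes from the `space/*` variables (4px scale); no hard-coded padding.
- Type styles only: `heading/lg`, `body/md`, `caption/sm`.

## Sequencing
1. Place the frame and apply auto layout before adding children.
2. Bind colors and spacing to variables, then run the screenshot check.
```

The point is that these are machine-readable rules, not wiki pages: the agent consults them on every edit rather than relying on whatever conventions happen to be in its training data.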
Your conventions are no longer static documentation. They become rules agents follow as they work—applied through components, variables, and the structure you’ve already defined.
The detail worth paying attention to is what Colyer describes as a self-healing loop. When an agent generates a screen, it screenshots the result, checks it against the design system, and iterates. Because it’s working with real components and auto layout, those corrections compound through the system itself, not just the pixels on screen.
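The loop Colyer describes is easy to sketch. This is not Figma's API; `generate`, `screenshot`, and `audit` are hypothetical stand-ins for the agent's tool calls (e.g. via use_figma), shown here only to make the generate/check/iterate shape concrete:

```python
def self_healing_loop(generate, screenshot, audit, max_iters=3):
    """Generate a screen, then screenshot, audit against the design
    system, and regenerate with feedback until it passes or we give up.

    Returns (screen, remaining_issues); an empty issue list means the
    screen passed the design-system check.
    """
    screen = generate(feedback=None)          # first attempt, no corrections yet
    issues = []
    for _ in range(max_iters):
        issues = audit(screenshot(screen))    # compare render to design-system rules
        if not issues:
            return screen, []                 # passes: components/variables line up
        screen = generate(feedback=issues)    # iterate with the audit findings
    return screen, issues
```

Because the corrections feed back through real components and variables rather than raw pixels, a fix made on one screen propagates to every other use of the same component.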
It’s free during beta, with plans to move to a paid API. Figma is late to the party here: Subframe, Paper, and Pencil already offer this workflow.


