49 posts tagged with “tools”

Chris Butler wrestles with a generations-old problem in his latest piece: new technologies shortcut the old ways of doing things, and quality takes a nosedive as a result. But is it different this time, with the tools available to us today?

While design is more accessible than ever, with Adobe experimenting with chat interfaces and Canva offering pro-level design apps for free, putting a tool into the hands of someone doesn’t mean they’ll know how to wield it.

Anyone can now create something that looks professional, that uses modern layouts and typography, that feels designed. But producing something that feels designed does not mean that any design has happened. Most tools don’t ask you what you want someone to do. They don’t force you to make hard choices about hierarchy and priority. They offer you options, and if you don’t already understand the fundamentals of how design guides attention and serves purpose, you’ll end up using too many of them to no end.

Butler concludes that as designers, we’re in a bind because “the pace of change is only accelerating, and it is a serious challenge to designers to determine how much time to spend keeping up.”

You can’t build foundational knowledge while chasing the new. But you can’t ignore the new entirely, or you’ll fall behind. So you split your time, and both efforts can suffer. The fundamentals remain elusive because you’re too busy keeping up. The tools remain half-learned because you’re too busy teaching [design fundamentals to clients].

Neither Butler nor I know if there’s a good solution to this problem. Like I said at the start, this is an age-old problem. Friction is a feature, not a bug.

This is just the reality of working in a field that sits at the intersection of human behavior and technological change. Both move, but at different speeds. Human attention, cognition, emotion — these things change slowly, if at all. Technology changes constantly. Design has to navigate both.

And while Butler’s essay never explicitly mentions AI or AI tools, they’re strongly implied. Developers using AI tools to code miss out on the fundamentals of building software. Designers (or their clients) using AI to design face the issues brought up here. Using AI to accelerate what you already know: that seems to be The Way.

The Fundamentals Problem

A few months ago, a client was reviewing a landing page design with my team. They had created it themselves using a page builder tool — one of those

chrbutler.com

I’ve been a big fan of node-based UIs since I first experimented with Shake in the early 2000s. It’s kind of weird to wrap your head around, especially if you’re used to layers in Photoshop or Figma. The easiest way to think about nodes is to rotate the layer stack 90 degrees. Each node takes inputs on the left, applies one distinct process, and emits outputs on the right. You connect multiple nodes together to process assets and form your final composition. Popular apps with node-based workflows today include Unreal Engine (Blueprints), DaVinci Resolve (Fusion and Color), and n8n.
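If it helps to see the mental model in code, here’s a toy sketch in TypeScript; the names are mine, not any real tool’s API:

```typescript
// Toy node graph: every node applies one operation to its inputs,
// and evaluating the final node pulls assets through the whole chain.
type Asset = string; // stand-in for an image, clip, or buffer

abstract class GraphNode {
  constructor(protected inputs: GraphNode[] = []) {}
  abstract process(inputs: Asset[]): Asset; // the node's one job
  evaluate(): Asset {
    // Depth-first: resolve every upstream node, then apply this one.
    return this.process(this.inputs.map((node) => node.evaluate()));
  }
}

class Source extends GraphNode {
  constructor(private asset: Asset) { super(); }
  process(): Asset { return this.asset; }
}

class Blur extends GraphNode {
  process([input]: Asset[]): Asset { return `blur(${input})`; }
}

class Composite extends GraphNode {
  // First input is layered over the second.
  process([fg, bg]: Asset[]): Asset { return `(${fg} over ${bg})`; }
}

// The "layer stack rotated 90 degrees": source -> blur -> composite.
const shot = new Composite([
  new Blur([new Source("rowboat.png")]),
  new Source("canyon.png"),
]);
console.log(shot.evaluate()); // (blur(rowboat.png) over canyon.png)
```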

ComfyUI is another open source tool that uses the same node graph architecture. Created in 2023 to put a UI on the visual generative AI models like Stable Diffusion that were appearing around that time, it has become popular among artists as a way to wield the plethora of image and video gen AI models.

Fast-forward to last week, when Figma announced they had acquired Weavy, a much friendlier, cloud-based version of ComfyUI.

Weavy brings the world’s leading AI models together with professional editing tools on a single, browser-based canvas. With Weavy, you can choose the model you want for a task (e.g. Seedance, Sora, and Veo for cinematic video; Flux and Ideogram for realism; and Nano-Banana or Seedream for precision) and compose powerful primitives using generative AI outputs and hands-on edits (e.g. adjusting lighting, masking an object, color grading a shot). The end result is an inspiring environment for creative exploration and a flexible media pipeline where every output feeds the next.

This node-based approach brings a new level of craft and control to AI generation. Outputs can be branched, remixed, and refined, combining creative exploration with precision and craft. The Weavy team has inspired us with the balance they’ve struck between simplicity, approachability, and power. They’ve also created a tool that’s just a joy to use.

I must admit I had not heard about Weavy before the announcement. I had high hopes for Visual Electric, but it never quite lived up to its ambitions. I proceeded to watch all the official tutorial videos on YouTube, and I love it. It seems so much easier to use than ComfyUI. Let’s see what Figma does with the product.

Node-based image editor with connected panels showing a man in a rowboat on water then composited floating over a deep canyon.

Introducing Figma Weave: the next generation of AI-native creation at Figma

Figma has acquired Weavy, a platform that brings generative AI and professional editing tools into the open canvas.

figma.com

In graphic design news, a new version of the Affinity suite dropped last week. Canva purchased Serif, the company behind the Affinity products, last year. After about a year of engineering, they’ve combined the separate apps into a single product to offer maximum flexibility. And they made it free.

Of course, that sparks debate.

Joe Foley, writing for Creative Bloq, explains:

…A natural suspicion of big corporations is causing some to worry about what the new Affinity will become. What’s in it for Canva?

Theories abound. Some think the app will start to show adverts like many free mobile apps do. Others think it will be used to train AI (something Canva denies). Some wonder if Canva’s just doing it to spite Adobe. “Their objective was to undermine Adobe, not provide for paying customers. Revenge instead of progress,” one person thinks.

Others fear Affinity’s tools will be left to stagnate. “If you depend on a software for your design work it needs to be regularly updated and developed. Free software never has that pressure and priority to be kept top notch,” one person writes.

AI features are gated behind Canva’s paid premium subscription plans, which makes sense, since AI features carry inference costs. And with Adobe going all out on AI, gen AI is now table stakes for creative and design programs.

Photo editor showing a man in a green jacket with gold chains against a purple gradient background, layers panel visible.

Is Affinity’s free Photoshop rival too good to be true?

Designers are torn over the new app.

creativebloq.com

I’ve been on the receiving end of Layer 1226 before and it’s not fun. While I’m pretty good with my layer naming hygiene, I’m not perfect. So I welcome anything that can help rename my layers. Apparently, when Adobe showed off this new AI feature at their Adobe MAX user conference last week, it drew a big round of applause. (Figma’s had this feature since June 2024.)

There’s more than just renaming layers though. Adobe is leaning into conversational UI for editing too. For new users coming to editing tools, this makes a lot of sense because the learning curve for Photoshop is very steep. But as I’ve always said, professionals will also need fine-grained controls.

Writing for CNET, Katelyn Chedraoui:

Renaming layers is just one of many things Adobe’s new AI assistants will be able to do. These chatbot-like tools will be added to Photoshop and Express. They have an emphasis on “conversational, agentic” experiences — meaning you can ask the chatbot to make edits, and it can independently handle them.

Express’s AI assistant is similar to using a chatbot. Once you toggle on the tool in the upper left corner, a conversation window pops up. You can ask the AI to change the color of an object or remove an obtrusive element. While pro users might be comfortable making those edits manually, the AI assistant might be more appealing to its less experienced users and folks working under a time crunch.

A peek into Adobe’s future reveals more agentic experiences:

Also announced on Tuesday is Project Moonlight, a new platform in beta on Adobe’s AI hub, Firefly. It’s a new tool that hopes to act as a creative partner. With your permission, it uses your data from Adobe platforms and social media accounts to help you create content. For example, you can ask it to come up with 20 ideas for what to do with your newest Lightroom photos based on your most successful Instagram posts in the past. 

These AI efforts represent a range of what conversational editing can look like, Mike Polner, Adobe Firefly’s vice president of product marketing for creators said in an interview. 

“One end of the spectrum is [to] type in a prompt and say, ‘Make my hat blue.’ That’s very simplistic,” said Polner. “With Project Moonlight, it can understand your context, explore and help you come up with new ideas and then help you analyze the content that you already have,” Polner said.

Photoshop AI Assistant UI over stone church landscape with large 'haven' text and command bubbles like 'Increase saturation'.

Photoshop’s New AI Assistant Can Rename All Your Layers So You Don’t Have To

The chatbot-like AI assistant isn’t out yet, but there is at least one practical way to use it.

cnet.com

It’s interesting to me that Figma had to have a separate conference and set of announcements focused on design systems. In some sense it’s an indicator of how big and mature this part of design has become.

A few highlights from my point-of-view…

Slots seems to solve one of those small UX paper cuts—those niggly inconveniences that we just lived with. But this is a big deal. You’ll be able to add layers within component instances without breaking the connection to your design system. No more pre-building hidden list items or forcing designers to detach components. Pretty advanced stuff.

On the code front, they’re making Code Connect actually approachable with a new UI that connects directly to GitHub and uses AI to map components. The Figma MCP server is out of beta and now supports design system guidelines—meaning your agentic coding tools can actually respect your design standards. Can’t wait to try these.

For teams like mine that are using Make, you’ll be able to pull in design systems through two routes: Make kits (generate React and CSS from Figma libraries) or npm package imports (bring in your existing code components). This is the part where AI-assisted design doesn’t have to mean throwing pixelcraft out the window.

Design systems have always been about maintaining quality at scale. These updates are very welcome.

Bright cobalt background with "schema" in a maroon bar and light-blue "by Figma" text, stepped columns of orange semicircles on pale-cyan blocks along right and bottom.

Schema 2025: Design Systems For A New Era

As AI accelerates product development, design systems keep the bar for craft and quality high. Here’s everything we announced at Schema to help teams design for the AI era.

figma.com

With Cursor and Lovable as the darlings of AI coding tools, don’t sleep on Claude Code. Personally, I’ve been splitting my time between Claude Code and Cursor. While Claude Code’s primary persona is coders and tinkerers, it can be used for so much more.

Lenny Rachitsky calls it “the most underrated AI tool for non-technical people.”

The key is to forget that it’s called Claude Code and instead think of it as Claude Local or Claude Agent. It’s essentially a super-intelligent AI running locally, able to do stuff directly on your computer—from organizing your files and folders to enhancing image quality, brainstorming domain names, summarizing customer calls, creating Linear tickets, and, as you’ll see below, so much more.

Since it’s running locally, it can handle huge files, run much longer than the cloud-based Claude/ChatGPT/Gemini chatbots, and it’s fast and versatile. Claude Code is basically Claude with even more powers.

Rachitsky shares 50 of his “favorite and most creative ways non-technical people are using Claude Code in their work and life.”

Everyone should be using Claude Code more

How to get started, and 50 ways non-technical people are using Claude Code in their work and life

lennysnewsletter.com

Noah Davis, writing in Web Designer Depot, says aloud what I’d thought but never written down: before AI, templates had already started to kill creativity in web design.

If you’re wondering why the web feels dead, lifeless, or like you’re stuck in a scrolling Groundhog Day of “hero image, tagline, three icons, CTA,” it’s not because AI hallucinated its way into the design department.

It’s because we templatified creativity into submission!

We used to design websites like we were crafting digital homes—custom woodwork, strange hallways, surprise color choices, even weird sound effects if you dared. Each one had quirks. A personality. A soul.

When I was coming up as a designer in the late 1990s and early 2000s, one of my favorite projects was designing Pixar.com. The animation studio’s soul—and by extension the soul I’d imbue into the website—was story. The way this manifested was in a linear approach to the site, similar to a slideshow, telling the story of each of their films.

And as the web design industry grew, and everyone from Fortune 500s to the local barber shop needed and wanted a website, access to well-designed websites was made possible via templates.

Let’s be real: clients aren’t asking for design anymore. They’re asking for “a site like this.” You know the one. It looks clean. It has animations. It scrolls smoothly. It’s “modern.” Which, in 2025, is just a euphemism for “I want what everyone else has so I don’t have to think.”

Templates didn’t just streamline web development. They rewired what people expect a website to be.

Why hire a designer when you can drop your brand colors into a no-code template, plug in some Lottie files, and call it a day? The end result isn’t bad. It’s worse than bad. It’s forgettable.

Davis ends his rant with a call to action: “If you want design to live, stop feeding the template machine. Build weird stuff. Ugly stuff. Confusing stuff. Human stuff.”

AI Didn’t Kill Web Design —Templates Did It First

The web isn’t dying because of AI—it’s drowning in a sea of templates. Platforms like Squarespace, Wix, and Shopify have made building a site easier than ever—but at the cost of creativity, originality, and soul. If every website looks the same, does design even matter anymore?

webdesignerdepot.com

Auto-Tagging the Post Archive

Since I finished migrating my site from Next.js/Payload CMS to Astro, I’ve been wanting to redo the tag taxonomy for my posts. They’d gotten out of hand over time, and the tag tumbleweed grew to more than 80 tags. What the hell was I thinking when I had both “product design” and “product designer”?

Anyway, I tried a few programmatic ways to determine the best taxonomy, but ultimately culled it down manually to 29 tags. Then I really didn’t want to manually go back and re-tag more than 350 posts, so I turned to AI. It took two attempts. The first approach, which Cursor planned for me, used ML to discern the tags, but it failed spectacularly because it relied on word frequency, not semantic meaning.

So I ultimately tried an LLM approach, and that worked. I spec’d it out and had Claude Code write it for me. Then, after another hour or so of experimenting to see whether the resulting tags worked, I let it run concurrently in four terminal windows to process all the posts from the past 20 years. Et voilà!
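If you’re curious what the LLM pass boils down to, here’s a minimal sketch in TypeScript using Anthropic’s SDK. The tag list, prompt, and model alias are placeholders; the script Claude Code actually wrote for me also handled rewriting the frontmatter and batching the posts:

```typescript
import Anthropic from "@anthropic-ai/sdk";
import { readFile } from "node:fs/promises";

// The real taxonomy has 29 tags; these few are stand-ins.
const TAGS = ["ai", "design-tools", "web-design", "typography"];

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

async function tagPost(path: string): Promise<string[]> {
  const markdown = await readFile(path, "utf8");
  const msg = await client.messages.create({
    model: "claude-sonnet-4-0", // placeholder alias; any capable model works
    max_tokens: 100,
    messages: [
      {
        role: "user",
        content:
          `Choose 1-3 tags for this blog post, using only tags from this ` +
          `list: ${TAGS.join(", ")}. Reply with a comma-separated list ` +
          `and nothing else.\n\n${markdown}`,
      },
    ],
  });
  const block = msg.content[0];
  const reply = block.type === "text" ? block.text : "";
  // Guard against drift: keep only tags that exist in the taxonomy.
  return reply
    .split(",")
    .map((tag) => tag.trim().toLowerCase())
    .filter((tag) => TAGS.includes(tag));
}
```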

I spot-checked at least half of all the posts manually and made some adjustments. But I’m pretty happy with the results.

See the new tags on the Search page or just click around and explore.

A computer circuit board traveling at warp speed through space with motion-blurred light streaks radiating outward, symbolizing high-performance computing and speed.

The Need for Speed: Why I Rebuilt My Blog with Astro

Two weekends ago, I quietly relaunched my blog. It was a heart transplant really, of the same design I'd launched in late March.

The First Iteration

Back in early November of last year, I re-platformed from WordPress to a home-grown, Cursor-made static site generator. I'd write in Markdown and push code to my GitHub repository and the post was published via Vercel's continuous deployment feature. The design was simple and it was a great learning project for me.

Conceptual 3D illustration of stacked digital notebooks with a pen on top, overlaid on colorful computer code patterns.

Why We Still Need a HyperCard for the AI Era

I rewatched the 1982 film TRON for the umpteenth time the other night with my wife. I have always credited this movie as the spark that got me interested in computers. Mind you, I was nine years old when this film came out. I was so excited after watching the movie that I got my father to buy us a home computer—the mighty Atari 400 (note sarcasm). I remember an educational game that came on cassette called “States & Capitals” that taught me, well, the states and their capitals. It also introduced me to BASIC, and after watching TRON, I wanted to write programs!

In a fascinating thread about designing a typeface in Illustrator versus a font editor, renowned typographer Jonathan Hoefler lets us peek behind the curtains.

But moreover, the reason not to design typefaces in a drawing program is that there, you’re drawing letters in isolation, without regard to their neighbors. Here’s the lowercase G from first corner of the HTF Didot family, its 96pt Light Roman master, which I drew toward the end of 1991. (Be gentle; I was 21.) I remember being delighted by the results, no doubt focussing on that delicate ear, etc. But really, this is only half the picture, because it’s impossible to know if this letter works, unless you give it context. Here it is between lowercase Ns, which establish a typographic ‘control’ for an alphabet’s weight, width, proportions, contrast, fit, and rhythm. Is this still a good G? Should the upper bowl maybe move left a little? How do we feel about its weight, compared to its neighbors? Is the ear too dainty?

Jonathan Hoefler on designing fonts in a drawing program versus a font editor

Threads

threads.com

Figma is adding to its keyboard shortcuts to improve navigation and selection for power users and for keyboard-only users. It’s a win-win that improves accessibility and efficiency. Sarah Kelley, product marketer at Figma writes:

For millions, navigating digital tools with a keyboard isn’t just about preference for speed and ergonomics—it’s a fundamental need. …

We’re introducing a series of new features that remove barriers for keyboard-only designers across most Figma products. Users can now pan the canvas, insert objects, and make precise selections quickly and easily. And, with improved screen reader support, these actions are read aloud as users work, making it easier to stay oriented.

Nice work!

Who Says Design Needs a Mouse?

Figma's new accessibility features bring better keyboard and screen reader support to all creators.

figma.com

My former colleague from Organic, Christian Haas—now ECD at YouTube—has been experimenting with AI video generation recently. He’s made a trilogy of short films called AI Jobs.

You can watch part one above 👆, but don’t sleep on parts two and three.

Haas put together a “behind the scenes” article explaining his process. It’s fascinating, and I want to play with video generation myself at some point.

I started with a rough script, but that was just the beginning of a conversation. As I started generating images, I was casting my characters and scouting locations in real time. What the model produced would inspire new ideas, and I would rewrite the script on the fly. This iterative loop continued through every stage. Decisions weren’t locked in; they were fluid. A discovery made during the edit could send me right back to “production” to scout a new location, cast a new character and generate a new shot. This flexibility is one of the most powerful aspects of creating with Gen AI.

It’s a wonderful observation Haas has made—the workflow enabled by gen AI allows for more creative freedom. In any creative endeavor where the production of the final thing is really involved and utilizes a significant amount of labor and materials, be it a film, commercial photography, or software, planning is a huge part. We work hard to spec out everything before a crew of a hundred shows up on set or a team of highly-paid engineers start coding. With gen AI, as shown here with Google’s Veo 3, you have more room for exploration and expression.

UPDATE: I came across this post from Rory Flynn after I published this. He uses diagrams to direct Veo 3.

Behind the Prompts — The Making of "AI Jobs"

Christian Haas created the first film with the simple goal of learning to use the tools. He didn’t know if it would yield anything worth watching but that was not the point.

linkedin.com

For the past year, CPG behemoth Unilever has been “working with marketing services group Brandtech to build up its Beauty AI Studio: a bespoke, in-house system inside its beauty and wellbeing business. Now in place across 18 different markets (the U.S. and U.K. among them), the studio is being used to make assets for paid social, programmatic display inventory and e-commerce usage across brands including Dove Intensive Repair, TRESemme Lamellar Shine and Vaseline Gluta Hya.”

Sam Bradley, writing in Digiday:

The system relies on Pencil Pro, a generative AI application developed by Brandtech Group. The tool draws on several large language models (LLMs), as well as API access to Meta and TikTok for effectiveness measurement. It’s already used by hearing-care brand Amplifon to rapidly produce text and image assets for digital ad channels.

In Unilever’s process, marketers use prompts and their own insights about target audiences to generate images and video based on 3D renders of each product, a practice sometimes referred to as “digital twinning.” Each brand in a given market is assigned a “BrandDNAi” — an AI tool that can retrieve information about brand guidelines and relevant regulations and that provides further limitations to the generative process.

So far, they haven’t used this system to generate AI humans. Yet.

Inside Unilever’s AI beauty marketing assembly line — and its implications for agencies

The CPG giant has created an AI-augmented in-house production system. Could it be a template for others looking to bring AI in house?

digiday.com

Kendra Albert, writing in her blog post about Heavyweight, a new tool she built to create “extremely law-firm-looking” letters:

Sometimes, you don’t need a lawyer, you just need to look like you have one.

That’s the idea behind Heavyweight, a project that democratizes the aesthetics of (in lieu of access to) legal representation. Heavyweight is a free, online, and open-source tool that lets you give any complaint you have extremely law-firm-looking formatting and letterhead. Importantly, it does so without ever using any language that would actually claim that the letter was written by a lawyer.

Heavyweight: Letters Taken Seriously - Free & Open Legal Letterhead Generator

Generate professional-looking demand letters with style and snootiness

heavyweight.cc

This is a really well-written piece that pulls the AI + design concepts neatly together. Sharang Sharma, writing in UX Collective:

As AI reshapes how we work, I’ve been asking myself, it’s not just how to stay relevant, but how to keep growing and finding joy in my craft.

In my learning, the new shift requires leveraging three areas

  1. AI tools: Assembling an evolving AI design stack to ship fast
  2. AI fluency: Learning how to design for probabilistic systems
  3. Human-advantage: Strengthening moats like craft, agency and judgment to stay ahead of automation

Together with strategic thinking and human-centric skills, these pillars shape our path toward becoming an AI-native designer.

Sharma connects all the crumbs I’ve been dropping this week:

AI tools + AI fluency + human advantage = AI-native designer

From tools to agency, is this what it would take to thrive as a product designer in the AI era?

uxdesign.cc

In case you missed it, there’s been a major shift in the AI tool landscape.

On Friday, OpenAI’s $3 billion offer to acquire AI coding tool Windsurf expired. Windsurf is the Pepsi to Cursor’s Coke. They’re both IDEs, the desktop applications that software developers use to write code. Think of them as supercharged text editors, but with AI built in.

On Friday evening, Google announced that it had hired Windsurf’s CEO Varun Mohan, co-founder Douglas Chen, and several key researchers for $2.4 billion.

On Monday, Cognition, the company behind Devin, the self-described “AI engineer” announced that it had acquired Windsurf for an undisclosed sum, but noting that its remaining 250 employees will “participate financially in this deal.”

Why does this matter to designers?

The AI tools market is changing very rapidly. With AI helping to write these applications, their numbers and features are always increasing—or in this case, maybe consolidating. Choose wisely before investing too deeply into one particular tool. The one piece of advice I would give here is to avoid lock-in. Don’t get tied to a vendor. Ensure that your tool of choice can export your work—the code.

Jason Lemkin has more on the business side of things and how it affects VC-backed startups.

Did Windsurf Sell Too Cheap? The Wild 72-Hour Saga and AI Coding Valuations

The last 72 hours in AI coding have been nothing short of extraordinary. What started as a potential $3 billion OpenAI acquisition of Windsurf ended with Google poaching Windsurf’s CEO and co…

saastr.com

Geoffrey Litt, Josh Horowitz, Peter van Hardenberg, and Todd Matthews, writing a paper for research lab Ink & Switch, offer a great, well-thought-out piece on what they call “malleable software.”

We envision a new kind of computing ecosystem that gives users agency as co-creators. … a software ecosystem where anyone can adapt their tools to their needs with minimal friction. … When we say ‘adapting tools’ we include a whole range of customizations, from making small tweaks to existing software, to deep renovations, to creating new tools that work well in coordination with existing ones. Adaptation doesn’t imply starting over from scratch.

In their paper, they use analogies like kitchen tools and tool arrangement in a workshop to explore their idea. With regard to the current crop of AI prompt-to-code tools:

We think these developments hold exciting potential, and represent a good reason to pursue malleable software at this moment. But at the same time, AI code generation alone does not address all the barriers to malleability. Even if we presume that every computer user could perfectly write and edit code, that still leaves open some big questions.

How can users tweak the existing tools they’ve installed, rather than just making new siloed applications? How can AI-generated tools compose with one another to build up larger workflows over shared data? And how can we let users take more direct, precise control over tweaking their software, without needing to resort to AI coding for even the tiniest change? None of these questions are addressed by products that generate a cloud-hosted application from a prompt.

Kind of a different take than the “personal software” we’ve seen written about before.

Malleable software: Restoring user agency in a world of locked-down apps

The original promise of personal computing was a new kind of clay. Instead, we got appliances: built far away, sealed, unchangeable. In this essay, we envision malleable software: tools that users can reshape with minimal friction to suit their unique needs.

inkandswitch.com

Here we go. Figma has just dropped their S-1, or their registration for an initial public offering (IPO).

A financial metrics slide showing Figma's key performance indicators on a dark green background. The metrics displayed are: $821M LTM revenue, 46% YoY revenue growth, 18% non-GAAP operating margin, 91% gross margin, 132% net dollar retention, 78% of Forbes 2000 companies use Figma, and 76% of customers use 2 or more products.

Rollup of stats from Figma’s S-1.

While a lot of the risk factors are boilerplate—legalese to cover their bases—the one about AI is particularly interesting, “Competitive developments in AI and our inability to effectively respond to such developments could adversely affect our business, operating results, and financial condition.”

Developments in AI are already impacting the software industry significantly, and we expect this impact to be even greater in the future. AI has become more prevalent in the markets in which we operate and may result in significant changes in the demand for our platform, including, but not limited to, reducing the difficulty and cost for competitors to build and launch competitive products, altering how consumers and businesses interact with websites and apps and consume content in ways that may result in a reduction in the overall value of interface design, or by otherwise making aspects of our platform obsolete or decreasing the number of designers, developers, and other collaborators that utilize our platform. Any of these changes could, in turn, lead to a loss of revenue and adversely impact our business, operating results, and financial condition.

There’s a lot of uncertainty they’re highlighting:

  • Could competitors use AI to build competing products?
  • Could AI reduce the need for websites and apps which decreases the need for interfaces?
  • Could companies reduce workforces, thus reducing the number of seats they buy?

These are all questions the greater tech industry is asking.

Figma Files Registration Statement for Proposed IPO | Figma Blog

An update on Figma's path to becoming a publicly traded company: our S-1 is now public.

figma.com

Darragh Burke and Alex Kern, software engineers at Figma, writing on the Figma blog:

Building code layers in Figma required us to reconcile two different models of thinking about software: design and code. Today, Figma’s visual canvas is an open-ended, flexible environment that enables users to rapidly iterate on designs. Code unlocks further capabilities, but it’s more structured—it requires hierarchical organization and precise syntax. To reconcile these two models, we needed to create a hybrid approach that honored the rapid, exploratory nature of design while unlocking the full capabilities of code.

The solution turned out to be code layers: actual canvas primitives that can be manipulated just like a rectangle and that respect auto layout properties, opacity, border radius, and so on.

The solution we arrived at was to implement code layers as a new canvas primitive. Code layers behave like any other layer, with complete spatial flexibility (including moving, resizing, and reparenting) and seamless layout integration (like placement in autolayout stacks). Most crucially, they can be duplicated and iterated on easily, mimicking the freeform and experimental nature of the visual canvas. This enables the creation and comparison of different versions of code side by side. Typically, making two copies of code for comparison requires creating separate git branches, but with code layers, it’s as easy as pressing ⌥ and dragging. This automatically creates a fork of the source code for rapid riffing.

In my experience, it works as advertised, though the code layer element takes a second to render when its spatial properties are edited. That makes sense, since it’s rendering code.

Canvas, Meet Code: Building Figma’s Code Layers

What if you could design and build on the same canvas? Here's how we created code layers to bring design and code together.

figma.com

If you want an introduction on how to use Cursor as a designer, here’s a must-watch video. It’s just over half-an-hour long and Elizabeth Lin goes through several demos in Cursor.

Cursor is much more advanced than the AI prompt-to-code tools I’ve covered here before. But with it, you’ll get much more control because you’re building with actual code. (Of course, sigh, you won’t have sliders and inputs for controlling design.)

A designer's guide to Cursor: How to build interactive prototypes with sound, explore visual styles, and transform data visualizations | Elizabeth Lin

How to use Cursor for rapid prototyping: interactive sound elements, data visualization, and aesthetic exploration without coding expertise

open.substack.com

David Singleton, writing in his blog:

Somewhere in the last few months, something fundamental shifted for me with autonomous AI coding agents. They’ve gone from a “hey this is pretty neat” curiosity to something I genuinely can’t imagine working without. Not in a hand-wavy, hype-cycle way, but in a very concrete “this is changing how I ship software” way.

I have to agree. My recent tinkering projects with Cursor using Claude 4 Sonnet (and set to Cursor’s MAX mode) have been much smoother and much more autonomous.

And Singleton has found that Claude Code and OpenAI Codex are good for different things:

For personal tools, I’ve completely shifted my approach. I don’t even look at the code anymore - I describe what I want to Claude Code, test the result, make some minor tweaks with the AI and if it’s not good enough, I start over with a slightly different initial prompt. The iteration cycle is so fast that it’s often quicker to start over than trying to debug or modify the generated code myself. This has unlocked a level of creative freedom where I can build small utilities and experiments without the usual friction of implementation details.

And the larger point Singleton makes is that if you direct the right context to the reasoning model, it can help you solve your problem more effectively:

This points to something bigger: there’s an emerging art to getting the right state into the context window. It’s sometimes not enough to just dump code at these models and ask “what’s wrong?” (though that works surprisingly often). When stuck, you need to help them build the same mental framework you’d give to a human colleague. The sequence diagram was essentially me teaching Claude how to think about our OAuth flow. In another recent session, I was trying to fix a frontend problem (some content wouldn’t scroll) and couldn’t figure out where I was missing the correct CSS incantation. Cursor’s Agent mode couldn’t spot it either. I used Chrome dev tools to copy the entire rendered HTML DOM out of the browser, put that in the chat with Claude, and it immediately pinpointed exactly where I was missing an overflow: scroll.

For my designer audience out there—likely 99% of you—I think this post is informative about how to work with reasoning models like Claude 4 or o4. This can totally apply to prompt-to-code tools like Lovable and v0. And these ideas can likely apply to Figma Make and Subframe as well.

Coding agents have crossed a chasm

Coding agents have crossed a chasm Somewhere in the last few months, something fundamental shifted for me with autonomous AI coding agents. They’ve gone from a “hey this is pretty neat” curiosity to something I genuinely can’t imagine working without.

blog.singleton.io

Peter Yang has been doing some amazing experiments with gen AI tools. There are so many models out there now, and I appreciate him working through them all and making this post and video.

I made a video testing Claude 4, ChatGPT O3, and Gemini 2.5 head-to-head for coding, writing, deep research, multimodal and more. What I found was that the “best” model depends on what you’re trying to do.

Here’s a handy chart to whet your appetite.

Comparison chart of popular AI tools (ChatGPT, Claude, Gemini, Grok, Perplexity) showing their capabilities across categories like writing, coding, reasoning, web search, and image/video generation, with icons indicating best performance (star), available (check), or unavailable (X). Updated June 2025.

ChatGPT vs Claude vs Gemini: The Best AI Model for Each Use Case in 2025

Comparing all 3 AI models for coding, writing, multimodal, and 6 other use cases

creatoreconomy.so

I’ve been focused a lot on AI for product design recently, but I think it’s just as important to talk about AI for web design. Though I spend my days now leading a product design team and thinking a lot about UX for creating enterprise software, web design is still a large part of the design industry, as evidenced by the big interest in Framer in the recent Design Tools Survey.

Eric Karkovack, writing for The WP Minute:

Several companies have released AI-based site generators; WordPress.com is among the latest. Our own Matt Medeiros took it for a spin. He “chatted” with a friendly bot that wanted to know more about his website needs. Within minutes, he had a website powered by WordPress.

These tools aren’t producing top agency-level websites just yet. Maybe they’re a novelty for the time being. But they’ll improve. With that comes the worry of their impact on freelancers. Will our potential clients choose a bot over a seasoned expert?

Karkovack is right. Current AI tools aren’t making well-thought-out custom websites yet. So as an agency owner or a freelance designer, you have to defend your position of expertise and customer service:

Those tools have a place in the market. However, freelancers and agencies must position themselves as the better alternative. We should emphasize our expertise and attention to detail, and communicate that AI is a helpful tool, not a magic wand.

But Karkovack misses an opportunity to offer sage advice, which I will do here. Take advantage of these tools in your workflow so that you can be more efficient in your delivery. If you’re in the WordPress ecosystem, use AI to generate some layout ideas, write custom JavaScript, make custom plugins, or write some copy. These AI tools are game-changing, so don’t rest on your laurels.

What Do AI Site Builders Mean for Freelancers?

Being a freelance web designer often means dealing with disruption. Sometimes, it’s a client who needs a new feature built ASAP. But it can also come from a shakeup in the technology we use. Artificial intelligence (AI) has undoubtedly been a disruptive force. It has upended our workflows and made…

thewpminute.com