
64 posts tagged with “tools”

Brand guidelines have always been a compromise. You document the rules—colors, typography, spacing, logo usage—and hope people follow them. They don’t, or they follow the letter while missing the spirit. Every designer who’s inherited a brand system knows the drift: assets that are technically on-brand but feel wrong, or interpretations that stretch “flexibility” past recognition.

Luke Wroblewski is pointing at something different:

Design projects used to end when “final” assets were sent over to a client. If more assets were needed, the client would work with the same designer again or use brand guidelines to guide the work of others. But with today’s AI software development tools, there’s a third option: custom tools that create assets on demand, with brand guidelines encoded directly in.

The key word is encoded. Not documented. Not explained in a PDF that someone skims once. Built into software that enforces the rules automatically.

Wroblewski again:

So instead of handing over static assets and static guidelines, designers can deliver custom software. Tools that let clients create their own on-brand assets whenever they need them.

That is a super interesting way of looking at it.

He built a proof of concept—the LukeW Character Maker—where an LLM rewrites user requests to align with brand style before the image model generates anything. The guidelines aren’t a reference document; they’re guardrails in the code.
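To make “encoded” concrete: the pattern is a two-step pipeline where an LLM sits between the user and the image model. Here’s a minimal sketch of that idea using the Anthropic Python SDK; the brand rules, model choice, and `generate_image` stub are all placeholders, not LukeW’s actual implementation:

```python
# A minimal sketch of the pattern (not LukeW's actual code): an LLM
# rewrites the user's request to fit brand rules before any image is made.
import anthropic

# Hypothetical brand guidelines, encoded as a system prompt.
BRAND_RULES = """Rewrite the user's image request so it conforms to the brand:
- characters are friendly, rounded, flat-color, no gradients
- palette: forest green (#1B5E20), warm gray, off-white
- if the request can't be made on-brand, rewrite it to the closest on-brand idea
Return only the rewritten image prompt."""

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set


def generate_image(prompt: str) -> bytes:
    """Placeholder: call your image model of choice here."""
    raise NotImplementedError


def on_brand_prompt(user_request: str) -> str:
    """Step 1, the guardrail: the LLM rewrites the request to match brand style."""
    msg = client.messages.create(
        model="claude-sonnet-4-5",
        max_tokens=300,
        system=BRAND_RULES,
        messages=[{"role": "user", "content": user_request}],
    )
    return msg.content[0].text


def make_asset(user_request: str) -> bytes:
    """Step 2: only the rewritten, on-brand prompt ever reaches the image model."""
    return generate_image(on_brand_prompt(user_request))
```

The guideline enforcement lives in the system prompt, so the client can ask for anything and still get something on-brand back.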

This isn’t purely theoretical. When Pentagram designed Performance.gov in 2024, they delivered a library of 1,500 AI-generated icons that any federal agency could use going forward. Paula Scher defended the approach by calling it “self-sustaining”—the deliverable wasn’t a fixed set of illustrations but a system that could produce more:

The problem that’s plagued government publishing is the inability to put together a program because of the interference of different people with different ideas. This solved that.

I think this is an interesting glimpse into the future. Brand guidelines might ship with software. I can even see a day when AI generates new design system components based on the guidelines.

Timeline showing three green construction-worker mascots growing larger from 2000 to 2006, final one with red hard hat reading a blueprint.

Design Tools Are The New Design Deliverables

Design projects used to end when "final" assets were sent over to a client. If more assets were needed, the client would work with the same designer again or us...

lukew.com

I spent all of last week linking to articles that say designers need to be more strategic. I still stand by that. But that doesn’t mean we shouldn’t understand the technical side of things.

Benhur Senabathi, writing for UX Collective, shipped 3 apps and 15+ working prototypes in 2025 using Claude Code and Cursor. His takeaway:

I didn’t learn to code this year. I learned to orchestrate. The difference matters. Coding is about syntax. Orchestration is about intent, systems, and knowing what ‘done’ looks like. Designers have been doing that for years. The tools finally caught up.

The skills that make someone good at design—defining outcomes, anticipating edge cases, communicating intent to people who don’t share your context—are exactly what AI-assisted building requires.

Senabathi again:

Prompting well isn’t about knowing how to code. It’s about articulating the ‘what’ and ‘why’ clearly enough that the AI can handle the ‘how.’

This echoes how Boris Cherny uses Claude Code. Cherny runs 10-15 parallel sessions, treating AI as capacity to orchestrate rather than a tool to use. Same insight, different vantage point: Cherny from engineering, Senabathi from design.

GitHub contributions heatmap reading "701 contributions in the last year" with Jan–Sep labels and varying green activity squares

Designers as agent orchestrators: what I learnt shipping with AI in 2025

Why shipping products matters in the age of AI and what designers can learn from it

uxdesign.cc

When I managed over 40 creatives at a digital agency, the hardest part wasn’t the work itself—it was resource allocation. Who’s got bandwidth? Who’s blocked waiting on feedback? Who’s deep in something and shouldn’t be interrupted? You learn to think of your team not as individuals you assign tasks to, but as capacity you orchestrate.

I was reminded of that when I read about Boris Cherny’s approach to Claude Code. Cherny is a Staff Engineer at Anthropic who helped build Claude Code. Karo Zieminski, writing in her Product with Attitude Substack, breaks down how Cherny actually uses his own tool:

He keeps ~10–15 concurrent Claude Code sessions alive: 5 in terminal (tabbed, numbered, with OS notifications). 5–10 in the browser. Plus mobile sessions he starts in the morning and checks in on later. He hands off sessions between environments and sometimes teleports them back and forth.

Zieminski’s analysis is sharp:

Boris doesn’t see AI as a tool you use, but as a capacity you schedule. He’s distributing cognition like compute: allocate it, queue it, keep it hot, switch contexts only when value is ready. The bottleneck isn’t generation; it’s attention allocation.

Most people treat AI assistants like a single very smart coworker. You give it a task, wait for the answer, evaluate, iterate. Cherny treats Claude like a team—multiple parallel workers, each holding different context, each making progress while he’s focused elsewhere.

Zieminski again:

Each session is a separate worker with its own context, not a single assistant that must hold everything. The “fleet” approach is basically: don’t make one brain do all jobs; run many partial brains.

I’ve been using Claude Code for months, but mostly one session at a time. Reading this, I realize I’ve been thinking too small. The parallel session model is about working efficiently. Start a research task in one session, let it run while you code in another, check back when it’s ready.
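For a taste of what a fleet looks like in practice, Claude Code’s headless print mode (`claude -p`) can be fanned out from a short script. A toy sketch, assuming the `claude` CLI is installed and with made-up task prompts:

```python
# A toy sketch of the "fleet" idea: fan out independent Claude Code runs
# in headless print mode (claude -p) and collect results as they finish.
import subprocess

tasks = {
    "research": "Survey how competitors handle onboarding; summarize in notes/onboarding.md",
    "refactor": "Extract the date-formatting helpers in src/utils.ts into their own module",
    "tests": "Add unit tests for the tag parser in scripts/tags.py",
}

# Each session is a separate worker with its own context.
procs = {
    name: subprocess.Popen(
        ["claude", "-p", prompt],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
    )
    for name, prompt in tasks.items()
}

# Check back when value is ready instead of babysitting each run.
for name, proc in procs.items():
    out, err = proc.communicate()
    print(f"--- {name} ---\n{out or err}")
```

Cherny’s actual setup is interactive terminal tabs and browser sessions, not a script, but the scheduling idea is the same: many partial brains, each making progress while your attention is elsewhere.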

Looks like the new skill on the block is orchestration.

Cartoon avatar in an orange cap beside text "I'm Boris and I created Claude Code." with "6.4M Views" in a sketched box.

How Boris Cherny Uses Claude Code

An in-depth analysis of how Boris Cherny, creator of Claude Code, uses it — and what it reveals about AI agents, responsibility, and product thinking.

open.substack.com

The data from Lenny’s Newsletter’s AI productivity survey showed PMs ranking prototyping as their #2 use case for AI, ahead of designers. Here’s what that looks like in practice.

Figma is now teaching PMs to build prototypes instead of writing PRDs. Using Figma Make, product managers can go from idea to interactive prototype without waiting on design. Emma Webster writing in Figma’s blog:

By turning early directions into interactive, high-fidelity prototypes, you can more easily explore multiple concepts and take ideas further. Instead of spending time writing documentation that may not capture the nuances of a product, prototypes enable you to show, rather than tell.

The piece walks through how Figma’s own PMs use Make for exploration, validation, and decision-making. One PM prototyped a feature flow and ran five user interviews—all within two days. Another used it to workshop scrolling behavior options that were “almost impossible to describe” in words.

The closing is direct about what this means for roles:

In this new landscape, the PMs who thrive will be those who embrace real-time iteration, moving fluidly across traditional role boundaries.

“Traditional role boundaries” being design’s territory.

This isn’t a threat if designers are already operating upstream—defining what to build, not just how it looks. But if your value proposition is “I make the mockups,” PMs now have tools to do that themselves.

Abstract blue scene with potted plants and curving vines, birds perched, a trumpet and ladder amid geometric icons.

Prototypes Are the New PRDs

Inside Figma Make, product managers are pressure-testing assumptions early, building momentum, and rallying teams around something tangible.

figma.com

The optimistic case for designers in an AI-driven world is that design becomes strategy—defining what to build, not just how it looks. But are designers actually making that shift?

Noam Segal and Lenny Rachitsky, writing for Lenny’s Newsletter, share results from a survey of 1,750 tech workers. The headline is that AI is “overdelivering”—55% say it exceeded expectations, and most report saving at least half a day per week. But the findings by role tell a different story for designers:

Designers are seeing the fewest benefits. Only 45% report a positive ROI (compared with 78% of founders), and 31% report that AI has fallen below expectations, triple the rate among founders.

Meanwhile, founders are using AI to think—for decision support, product ideation, and strategy. They treat it as a thought partner, not a production tool. And product managers are building prototypes themselves:

Compare prototyping: PMs have it at #2 (19.8%), while designers have it at #4 (13.2%). AI is unlocking skills for PMs outside of their core work, whereas designers aren’t seeing the marginal improvement benefits from AI doing their core work.

The survey found that AI helps designers with work around design—research synthesis, copy, ideation—but visual design ranks #8 at just 3.3%. As Segal puts it:

AI is helping designers with everything around design, but pushing pixels remains stubbornly human.

This is the gap. The strategic future is available, but designers aren’t capturing it at the same rate as other roles. The question is why—and what to do about it.

Checked clipboard showing items like Speed, Quality and Research, next to headline "How AI is impacting productivity for tech workers"

AI tools are overdelivering: results from our large-scale AI productivity survey

What exactly AI is doing for people, which AI tools have product-market fit, where the biggest opportunities remain, and what it all means

lennysnewsletter.com

Almost a year ago, I linked to Lee Robinson’s essay “Personal Software” and later explored why we need a HyperCard for the AI era. The thesis: people would stop searching the App Store and start building what they need. Disposable tools for personal problems.

That future is arriving. Dominic-Madori Davis, writing for TechCrunch, documents the trend:

It is a new era of app creation that is sometimes called micro apps, personal apps, or fleeting apps because they are intended to be used only by the creator (or the creator plus a select few other people) and only for as long as the creator wants to keep the app. They are not intended for wide distribution or sale.

What I find compelling here is the word “fleeting.” We’ve been conditioned to think of software as permanent infrastructure—something you buy, maintain, and eventually migrate away from. But these micro apps are disposable by design. One founder built a gaming app for his family to play over the holidays, then shut it down when vacation ended. That’s not a failed product. That’s software that did exactly what it needed to do.

Howard University professor Legand L. Burge III frames it well:

It’s similar to how trends on social media appear and then fade away. But now, [it’s] software itself.

The examples in the piece range from practical (an allergy tracker, a parking ticket auto-payer) to whimsical (a “vice tracker” for monitoring weekend hookah consumption). But the one that stuck with me was the software engineer who built his friend a heart palpitation logger so she could show her doctor her symptoms. That’s software as a favor. Software as care.

Christina Melas-Kyriazi from Bain Capital Ventures offers what I think is the most useful framing:

It’s really going to fill the gap between the spreadsheet and a full-fledged product.

This is exactly right. For years, spreadsheets have been the place where non-developers build their own tools—janky, functional, held together with VLOOKUP formulas and conditional formatting. Micro apps are the evolution of that impulse, but with real interfaces and actual logic.

The quality concerns are real—bugs, security flaws, apps that only their creator can debug. But for personal tools that handle personal problems, “good enough for one” is genuinely good enough.

Woman with white angel wings holding a glowing wand, wearing white dress and boots, hovering above a glowing smartphone.

The rise of ‘micro’ apps: non-developers are writing apps instead of buying them

A new era of app creation is here. It’s fun, it’s fast, and it’s fleeting.

techcrunch.com

Last December, Cursor announced their visual editor—a way to edit UI directly in the browser. Karri Saarinen, the designer who co-founded Linear, saw it and called it a trap. Ryo Lu, the head of design at Cursor, pushed back. The Twitter back-and-forth went on for a couple of days until they conceded they mostly agreed. Tommy Geoco digs into what the debate actually surfaced.

The traditional way we talk about design tools is floor versus ceiling—does the tool make good design more accessible, or does it push what’s possible? Geoco argues the Saarinen/Lu exchange revealed a second axis: unconstrained exploration versus material exploration. Sketching on napkins versus building in code.

Saarinen’s concern:

Whenever a designer becomes more of a builder, some idealism and creativity dies. It’s not because building is bad, but because you start introducing constraints earlier in the process than you should.

Lu’s counter:

The truth only reveals itself once you start to build. Not when you think about building, not when you sketch possibilities in a protected space, but when you actually make the thing real and let reality talk back.

Both are right, and Geoco’s reframing is useful:

The question is not should designers code. It’s are you using the new speed to explore more territory or just arriving at the same destination faster?

That’s the question I keep asking myself. When I use AI tools, am I discovering ideas I wouldn’t have found otherwise, or am I just getting to obvious ideas faster? The tools make iteration cheap, but cheap iteration on the same territory isn’t progress.

I think about it this way—back when I was starting out, sketching thumbnails was the technique I used. It was very quick and easy to sketch out dozens of ideas in a sketchbook, especially when they were logo or poster ideas. When sketching interaction ideas, the technique is closer to a storyboard—connected thumbnails. But for me, once I get into a high-fidelity design or prototype, there is tremendous pull to just keep tweaking the design rather than coming up with multiple options. In other words, convergence is happening rather than continued divergence.

This was the biggest debate in design [last] year

Two designers: One built Linear. One leads design at Cursor. They got into it on Twitter for 48 hours about the use of AI coding tools in the design work. This debate perfectly captures both sides of what's happening in software design right now. I've spent the year exploring how designers are experimenting on both sides of this argument. This is what I've found.

youtube.com
Storyboard grid showing a young man and family: kitchen, driving, airplane, supermarket, night house, grill, dinner.

Directing AI: How I Made an Animated Holiday Short

My first taste of generating art with AI was back in 2021 with Wombo Dream. I even used it to create very trippy illustrations for a series I wrote on getting a job as a product designer. To be sure, the generations were weird, if not outright ugly. But it was my first test of getting an image by typing in some words. Both Stable Diffusion and Midjourney gained traction the following year and I tried both as well. The results were never great or satisfactory. Years upon years of being an art director had made me very, very picky—or put another way, I had developed taste.

I didn’t touch generative AI art again until I saw a series of photos by Lars Bastholm playing with Midjourney.

Child in yellow jacket smiling while holding a leash to a horned dragon by a park pond in autumn.

Lars Bastholm created this in Midjourney, prompting “What if, in the 1970s, they had a ‘Bring Your Monster’ festival in Central Park?”

That’s when I went back to Midjourney and started to illustrate my original essays with images generated by it, but usually augmented by me in Photoshop.

In the intervening years, generative AI art tools had developed a common set of functionality that was all very new to me: inpainting, style, chaos, seed, and more. Beyond closed systems like Midjourney and OpenAI’s DALL-E, open-source models like Stable Diffusion and Flux, and now a plethora of Chinese models, offer even better prompt adherence and controllability via even more opaque-sounding functionality: control nets, LoRAs, CFG, and other parameters. It’s funny to me that for a very artistic field, the associated products to enable these creations are very technical.
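To make two of those opaque terms concrete: the seed pins down the random noise the model starts from, so a generation is reproducible, and CFG (classifier-free guidance) controls how literally the model follows the prompt. A minimal sketch using the open-source diffusers library; the model ID and prompt are just examples:

```python
# Seed and CFG in practice, using Hugging Face's diffusers library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The seed fixes the starting noise, making the same prompt reproducible.
generator = torch.Generator("cuda").manual_seed(42)

# guidance_scale is CFG: higher values follow the prompt more literally,
# lower values give the model more freedom.
image = pipe(
    "a storyboard frame of a family dinner, flat illustration",
    guidance_scale=7.5,
    generator=generator,
).images[0]
image.save("frame.png")
```

Same prompt, same seed, same parameters: same image. Change any one and you get a different result, which is exactly the kind of control knob artists lean on.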

This is a fascinating watch. Ryo Lu, Head of Design at Cursor, builds a retro Mac calculator using Cursor agents while being interviewed. Lu’s personal website is an homage to Mac OS X, complete with Aqua-style UI elements. He runs multiple local background agents without them stepping on each other, fixes bugs live, and themes UI to match system styles so it feels designed—not “purple AI slop,” as he calls it.

Lu, as interviewed by Peter Yang, on how engineers and designers work together at Cursor (lightly edited for clarity):

So at Cursor, the roles between designers, PM, and engineers are really muddy. We kind of do the part [that is] our unique strength. We use the agent to tie everything. And when we need help, we can assemble people together to work on the thing.

Maybe some of [us] focus more on the visuals or interactions. Some focus more on the infrastructure side of things, where you design really robust architecture to scale the thing. So yeah, there is a lot less separation between roles and teams or even tools that we use. So for doing designs…we will maybe just prototype in Cursor, because that lets us really interact with the live states of the app. It just feels a lot more real than some pictures in Figma.

And surprisingly, they don’t have official product managers at Cursor. Yang asks, “Did you actually hire a PM? Because last time I talked to Lee [Robinson] there were like no PMs.”

Lu again, and edited lightly for clarity:

So we did not hire a PM yet, but we do have an engineer who used to be a founder. He took a lot more of the PM-y side of the job, and then became the first PM of the company. But I would still say a lot of the PM jobs are kind of spread across the builders in the team.

That mostly makes sense because it’s engineers building tools for engineers. You are your audience, which is rare.

Full Tutorial: Design to Code in 45 Min with Cursor's Head of Design | Ryo Lu

Design-to-code tutorial: Watch Cursor's Head of Design Ryo Lu build a retro Mac calculator with agents - a 45-minute, hands-on walkthrough to prototype and ship

youtube.com

It’s always interesting for me to read how other designers use AI to vibe code their projects. I think using Figma Make to conjure a prototype is one thing, but vibe coding something in production is entirely different. Personally, I’ve been through it a couple of times, which I’ve already detailed here and here.

Anton Sten recently wrote about his process. Like me, he starts in Figma:

This might be the most important part: I don’t start by talking to AI. I start in Figma.

I know Figma. I can move fast there. So I sketch out the scaffolding first—general theme, grids, typography, color. Maybe one or two pages. Nothing polished, just enough to know what I’m building.

Why does this matter? Because AI will happily design the wrong thing for you. If you open Claude Code with a vague prompt and no direction, you’ll get something—but it probably won’t be what you needed. AI is a builder, not an architect. You still have to be the architect.

I appreciate Sten’s conclusion to not let the AI do all of it for you, echoing Dr. Maya Ackerman’s sentiment of humble creative machines:

But—and this is important—you still need design thinking and systems thinking. AI handles the syntax, but you need to know what you’re building, why you’re building it, and how the pieces fit together. The hard part was never the code. The hard part is the decisions.

Vibe coding for designers: my actual process | Anton Sten

An honest breakdown of how I built and maintain antonsten.com using AI—what actually works, where I’ve hit walls, and why designers should embrace this approach.

antonsten.com

This episode of Design of AI with Dr. Maya Ackerman is wonderful. She echoed a lot of what I’ve been thinking about recently—how AI can augment what we as designers and creatives can do. There’s a ton of content out there that hypes up AI that can replace jobs—“Type this prompt and instantly get a marketing plan!” or “Type this prompt and get an entire website!”

Ackerman, as interviewed by Arpy Dragffy-Guerrero:

I have a model I developed which is called humble creative machines, which is the idea that we are inherently much smarter than the AI. We have not reached even 10% of our capacity as creative human beings. And the role of AI in this ecosystem is not to become better than us but to help elevate us. That applies to people who design AI, of course, because a lot of the ways that AI is designed these days, you can tell you’re cut out of the loop. But on the other hand, some of the most creative people, those who are using AI in the most beneficial way, take this attitude themselves. They fight to stay in charge. They find ways to have the AI serve their purposes instead of treating it like an all-knowing oracle. So really, it’s sort of the audacity, the guts to believe that you are smarter than this so-called oracle, right? It’s this confidence to lead, to demand that things go your way when you’re using AI.

Her stance is that those who use AI best are those who wield it and shape its output to match their sensibilities. And so, as we’ve been hearing ad nauseam, our taste and judgement as designers really matter right now.

I’ve been playing a lot with ComfyUI recently—I’m working on a personal project that I’ll share if/when I finish it. But it made me realize that prompting a visual to get it to match what I have in my mind’s eye is not easy. This recent Instagram reel from famed designer Jessica Walsh captures my thoughts well:

I would say most AI output is shitty. People just assumed, “Oh, you rendered that in AI.” “That must have been super easy.” But what they don’t realize is that it took an entire day of some of our most creative people working and pushing the different prompts and trying different tools out and experimenting and refining. And you need a good eye to understand how to curate and pick what the best outputs are. Without that right now, AI is still pretty worthless.

It takes a ton of time to get AI output to look great, beyond prompting: inpainting, control nets, and even Photoshopping. What most non-professionals do is they take the first output from an LLM or image generator and present it as great. But it’s really not.

So I like what Dr. Ackerman mentioned in her episode: we should be in control of the humble machines, not the other way around.

Headshot of a blonde woman in a patterned blazer with overlay text "Future of Human - AI Creativity" and "Design of AI"

The Future of Human-AI Creativity [Dr. Maya Ackerman]

AI is threatening creativity, but that's because we're giving too much control to the machine to think on our behalf. In this episode, Dr. Maya Ackerman…

designof.ai

When Figma acquired Weavy last month, I wrote a little bit about node-based UIs and ComfyUI. Looks like Adobe has been exploring this user interface paradigm as well.

Daniel John writes in Creative Bloq:

Project Graph is capable of turning complex workflows into user-friendly UIs (or ‘capsules’), and can access tools from across the Creative Cloud suite, including Photoshop, Illustrator and Premiere Pro – making it a potentially game-changing tool for creative pros.

But it isn’t just Adobe’s own tools that Project Graph is able to tap into. It also has access to the multitude of third party AI models Adobe recently announced partnerships with, including those made by Google, OpenAI and many more.

These tools can be used to build a node-based workflow, which can then be packaged into a streamlined tool with a deceptively simple interface.

And from Adobe’s blog post about Project Graph:

Project Graph is a new creative system that gives artists and designers real control and customization over their workflows at scale. It blends the best AI models with the capabilities of Adobe’s creative tools, such as Photoshop, inside a visual, node-based editor so you can design, explore, and refine ideas in a way that feels tactile and expressive, while still supporting the precision and reliability creative pros expect.

I’ve been playing around with ComfyUI a lot recently (more about this in a future post), so I’m very excited to see how this kind of UI can fit into Adobe’s products.

Stylized dark grid with blue-purple modular devices linked by cables, central "Ps" Photoshop icon.

Adobe just made its most important announcement in years

Here’s why Project Graph matters for creatives.

creativebloq.com

SaaS company Urlbox created a fun project called One Million Screenshots, with, yup, over a million screenshots of the top one million websites. You navigate the page like Google Maps, by zooming in and panning around.

Why? From the FAQ page:

We wanted to celebrate Urlbox taking over 100 million screenshots for customers in 2023… so we thought it would be fun to take an extra 1,048,576 screenshots every month… did we mention we’re really into screenshots.

(h/t Brad Frost)

One Million Screenshots

Explore the web’s biggest homepage. Discover similar sites. See changes over time. Get web data.

onemillionscreenshots.com

Ryan Feigenbaum created a fun Teenage Engineering-inspired color palette generator he calls “ColorPalette Pro.” Back in 2023, he was experimenting with programmatic palette generation. But he didn’t like his work, calling the resulting palettes “gross, with luminosity all over the place, clashing colors, and garish combinations.”

So Feigenbaum went back to the drawing board:

That set off a deep dive into color theory, reading various articles and books like Josef Albers’ Interaction of Color (1963), understanding color space better, all of which coincided with an explosion of new color methods and technical support on the web.

These frustrations and browser improvements culminated in a realization and an app.
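Those new color methods include perceptual spaces like OKLCH, now supported natively in CSS, where lightness values actually track perceived lightness. As a toy illustration of why that tames “luminosity all over the place,” here’s a sketch (not Feigenbaum’s algorithm) that holds lightness and chroma steady and rotates only the hue:

```python
# A toy palette generator in OKLCH (not Feigenbaum's algorithm): hold
# lightness and chroma constant, rotate hue, and luminosity stays consistent.
def palette(lightness: float = 0.72, chroma: float = 0.12, steps: int = 5) -> list[str]:
    return [
        f"oklch({lightness:.2f} {chroma:.2f} {round(i * 360 / steps)})"
        for i in range(steps)
    ]

print(palette())
# ['oklch(0.72 0.12 0)', 'oklch(0.72 0.12 72)', 'oklch(0.72 0.12 144)', ...]
```

Do the same thing in HSL and the swatches come out wildly different in brightness, which is exactly the garishness he was fighting.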

Here he is, demoing his app:

COLORPALETTE PRO UI showing Vibrant Violet: color wheel, purple-to-orange swatch grid, and lightness/chroma/hue sliders.

Color Palette Pro — A Synthesizer for Color Palettes

Generate customizable color palettes in advanced color spaces that can be easily shared, downloaded, or exported.

colorpalette.pro

He told me his CEO - who’s never written a line of code - was running their company from an AI code editor.

I almost fell out of my chair.

OF COURSE. WHY HAD I NOT THOUGHT OF THAT.

I’ve since gotten rid of almost all of my productivity tools.

ChatGPT, Notion, Todoist, Airtable, Google Keep, Perplexity, my CRM. All gone.

That’s the lede for a piece by Derek Larson on running everything from Claude Code. I’ve covered how Claude Code is pretty brilliant and there are dozens more use cases than just coding.

But getting rid of everything and using just text files and the terminal window? Seems extreme.

Larson uses a skill in Claude Code called “/weekly” to do a weekly review:

  1. Claude looks at every file change since last week
  2. Claude evaluates the state of projects, tasks, and the roadmap
  3. We have a conversation to dig deeper, and make decisions
  4. Claude generates a document summarizing the week and plan we agreed on

Then Claude finds items he’s missed or is procrastinating on, and “creates a space to dump everything” on his mind.
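Claude Code supports custom slash commands defined as Markdown files in a `.claude/commands/` folder. A hypothetical reconstruction of what a /weekly command could contain, based only on the steps above (not Larson’s actual skill):

```markdown
<!-- .claude/commands/weekly.md — a hypothetical reconstruction, not Larson's file -->
Run my weekly review:

1. List every file changed since last week (use git log/diff on this folder).
2. Evaluate the state of my projects, tasks, and roadmap files.
3. Ask me follow-up questions so we can dig deeper and make decisions together.
4. Write a document to reviews/ summarizing the week and the plan we agreed on.
5. Flag anything I've missed or seem to be procrastinating on, then open an inbox
   note where I can dump everything on my mind.
```

Because the whole workflow is plain text plus a prompt, there’s no app to maintain, which is presumably how Larson replaced that stack of productivity tools.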

Blue furry Cookie Monster holding two baking sheets filled with chocolate chip cookies.

Feed the Beast

AI Eats Software

dtlarson.com

Chris Butler wrestles with a generations-old problem in his latest piece: new technologies shortcut the old ways of doing things, and quality takes a nosedive. But is it different this time, with the tools available to us today?

While design is more accessible than ever, with Adobe experimenting with chat interfaces and Canva offering pro-level design apps for free, putting a tool into the hands of someone doesn’t mean they’ll know how to wield it.

Anyone can now create something that looks professional, that uses modern layouts and typography, that feels designed. But producing something that feels designed does not mean that any design has happened. Most tools don’t ask you what you want someone to do. They don’t force you to make hard choices about hierarchy and priority. They offer you options, and if you don’t already understand the fundamentals of how design guides attention and serves purpose, you’ll end up using too many of them to no end.

Butler concludes that as designers, we’re in a bind because “the pace of change is only accelerating, and it is a serious challenge to designers to determine how much time to spend keeping up.”

You can’t build foundational knowledge while chasing the new. But you can’t ignore the new entirely, or you’ll fall behind. So you split your time, and both efforts can suffer. The fundamentals remain elusive because you’re too busy keeping up. The tools remain half-learned because you’re too busy teaching [design fundamentals to clients].

Neither Butler nor I know if there’s a good solution to this problem. Like I said at the start, this is an age-old problem. Friction is a feature, not a bug.

This is just the reality of working in a field that sits at the intersection of human behavior and technological change. Both move, but at different speeds. Human attention, cognition, emotion — these things change slowly, if at all. Technology changes constantly. Design has to navigate both.

And while Butler’s essay never explicitly mentions AI or AI tools, it’s strongly implied. Developers using AI tools to code miss out on the fundamentals of building software. Designers (or their clients) using AI to design face the issues brought up here. Those who use AI to accelerate what they already know seem to have found The Way.

The Fundamentals Problem

A few months ago, a client was reviewing a landing page design with my team. They had created it themselves using a page builder tool — one of those

chrbutler.com

I’ve been a big fan of node-based UIs since I first experimented with Shake in the early 2000s. It’s kind of weird to wrap your head around, especially if you’re used to layers in Photoshop or Figma. The easiest way to think about nodes is to rotate the layer stack 90 degrees. Each node takes inputs on the left, performs a distinct process on them, and outputs the result on the right. You connect multiple nodes together to process assets into your final composition. Popular apps with node-based workflows today include Unreal Engine (Blueprints), DaVinci Resolve (Fusion and Color), and n8n.
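If the rotated layer stack is hard to picture, a node is just a function with named inputs and one output, and the graph renders by pulling values through the wires. A toy sketch in Python, purely illustrative and unrelated to any of these apps’ internals:

```python
# A toy node graph: each node is a process with inputs on the "left"
# and an output on the "right"; wires connect outputs to inputs.
from dataclasses import dataclass, field

@dataclass
class Node:
    process: callable  # what this node does to its inputs
    inputs: dict = field(default_factory=dict)  # name -> upstream Node or constant

    def evaluate(self):
        # Pull values through the wires: evaluate upstream nodes first.
        args = {
            name: (src.evaluate() if isinstance(src, Node) else src)
            for name, src in self.inputs.items()
        }
        return self.process(**args)

# Wire up a tiny composite: load an image, blur it, place it over a background.
load = Node(lambda path: f"pixels({path})", {"path": "boat.png"})
blur = Node(lambda image, radius: f"blur({image}, r={radius})", {"image": load, "radius": 4})
comp = Node(lambda fg, bg: f"over({fg}, {bg})", {"fg": blur, "bg": "canyon.png"})

print(comp.evaluate())  # over(blur(pixels(boat.png), r=4), canyon.png)
```

The payoff over layers: any node’s output can feed several downstream branches, so you can remix and refine without duplicating the upstream work.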

ComfyUI is another open-source tool that uses the same node-graph architecture. Created in 2023 to put a UI on the visual generative AI models like Stable Diffusion appearing around that time, it has become popular among artists as a way to wield the plethora of image and video gen AI models.

Fast-forward to last week, when Figma announced they had acquired Weavy, a much friendlier and cloud-based version of ComfyUI.

Weavy brings the world’s leading AI models together with professional editing tools on a single, browser-based canvas. With Weavy, you can choose the model you want for a task (e.g. Seedance, Sora, and Veo for cinematic video; Flux and Ideogram for realism; and Nano-Banana or Seedream for precision) and compose powerful primitives using generative AI outputs and hands-on edits (e.g. adjusting lighting, masking an object, color grading a shot). The end result is an inspiring environment for creative exploration and a flexible media pipeline where every output feeds the next.

This node-based approach brings a new level of craft and control to AI generation. Outputs can be branched, remixed, and refined, combining creative exploration with precision and craft. The Weavy team has inspired us with the balance they’ve struck between simplicity, approachability, and power. They’ve also created a tool that’s just a joy to use.

I must admit I had not heard about Weavy before the announcement. I had high hopes for Visual Electric, but it never quite lived up to its ambitions. I proceeded to watch all the official tutorial videos on YouTube and I love it. It seems so much easier to use than ComfyUI. Let’s see what Figma does with the product.

Node-based image editor with connected panels showing a man in a rowboat on water then composited floating over a deep canyon.

Introducing Figma Weave: the next generation of AI-native creation at Figma

Figma has acquired Weavy, a platform that brings generative AI and professional editing tools into the open canvas.

figma.com

In graphic design news, a new version of the Affinity suite dropped last week, and it’s free. Canva purchased Serif, the company behind the Affinity products, last year. After about a year of engineering, they have combined all the products into a single app to offer maximum flexibility. And they made it free.

Of course, that sparked debate.

Joe Foley, writing for Creative Bloq, explains:

…A natural suspicion of big corporations is causing some to worry about what the new Affinity will become. What’s in it for Canva?

Theories abound. Some think the app will start to show adverts like many free mobile apps do. Others think it will be used to train AI (something Canva denies). Some wonder if Canva’s just doing it to spite Adobe. “Their objective was to undermine Adobe, not provide for paying customers. Revenge instead of progress,” one person thinks.

Others fear Affinity’s tools will be left to stagnate. “If you depend on a software for your design work it needs to be regularly updated and developed. Free software never has that pressure and priority to be kept top notch,” one person writes.

AI features are gated behind paid Canva premium subscription plans. This makes sense, as AI features have inference costs. And with Adobe going all out on its AI features, gen AI is now table stakes for creative and design programs.

Photo editor showing a man in a green jacket with gold chains against a purple gradient background, layers panel visible.

Is Affinity’s free Photoshop rival too good to be true?

Designers are torn over the new app.

creativebloq.com

I’ve been on the receiving end of Layer 1226 before and it’s not fun. While I’m pretty good with my layer naming hygiene, I’m not perfect. So I welcome anything that can help rename my layers. Apparently, when Adobe showed off this new AI feature at their Adobe MAX user conference last week, it drew a big round of applause. (Figma’s had this feature since June 2024.)

There’s more than just renaming layers though. Adobe is leaning into conversational UI for editing too. For new users coming to editing tools, this makes a lot of sense because the learning curve for Photoshop is very steep. But as I’ve always said, professionals will also need fine-grained controls.

Writing for CNET, Katelyn Chedraoui:

Renaming layers is just one of many things Adobe’s new AI assistants will be able to do. These chatbot-like tools will be added to Photoshop and Express. They have an emphasis on “conversational, agentic” experiences — meaning you can ask the chatbot to make edits, and it can independently handle them.

Express’s AI assistant is similar to using a chatbot. Once you toggle on the tool in the upper left corner, a conversation window pops up. You can ask the AI to change the color of an object or remove an obtrusive element. While pro users might be comfortable making those edits manually, the AI assistant might be more appealing to its less experienced users and folks working under a time crunch.

A peek into Adobe’s future reveals more agentic experiences:

Also announced on Tuesday is Project Moonlight, a new platform in beta on Adobe’s AI hub, Firefly. It’s a new tool that hopes to act as a creative partner. With your permission, it uses your data from Adobe platforms and social media accounts to help you create content. For example, you can ask it to come up with 20 ideas for what to do with your newest Lightroom photos based on your most successful Instagram posts in the past. 

These AI efforts represent a range of what conversational editing can look like, Mike Polner, Adobe Firefly’s vice president of product marketing for creators, said in an interview.

“One end of the spectrum is [to] type in a prompt and say, ‘Make my hat blue.’ That’s very simplistic,” said Polner. “With Project Moonlight, it can understand your context, explore and help you come up with new ideas and then help you analyze the content that you already have,” Polner said.

Photoshop AI Assistant UI over stone church landscape with large 'haven' text and command bubbles like 'Increase saturation'.

Photoshop’s New AI Assistant Can Rename All Your Layers So You Don’t Have To

The chatbot-like AI assistant isn’t out yet, but there is at least one practical way to use it.

cnet.com

It’s interesting to me that Figma had to have a separate conference and set of announcements focused on design systems. In some sense it’s an indicator of how big and mature this part of design has become.

A few highlights from my point-of-view…

Slots seems to solve one of those small UX paper cuts—those niggly inconveniences that we just lived with. But this is a big deal. You’ll be able to add layers within component instances without breaking the connection to your design system. No more pre-building hidden list items or forcing designers to detach components. Pretty advanced stuff.

On the code front, they’re making Code Connect actually approachable with a new UI that connects directly to GitHub and uses AI to map components. The Figma MCP server is out of beta and now supports design system guidelines—meaning your agentic coding tools can actually respect your design standards. Can’t wait to try these.

For teams like mine that are using Make, you’ll be able to pull in design systems through two routes: Make kits (generate React and CSS from Figma libraries) or npm package imports (bring in your existing code components). This is the part where AI-assisted design doesn’t have to mean throwing pixelcraft out the window.

Design systems have always been about maintaining quality at scale. These updates are very welcome.

Bright cobalt background with "schema" in a maroon bar and light-blue "by Figma" text, stepped columns of orange semicircles on pale-cyan blocks along right and bottom.

Schema 2025: Design Systems For A New Era

As AI accelerates product development, design systems keep the bar for craft and quality high. Here’s everything we announced at Schema to help teams design for the AI era.

figma.com

With Cursor and Lovable as the darlings of AI coding tools, don’t sleep on Claude Code. Personally, I’ve been splitting my time between Claude Code and Cursor. While Claude Code’s primary persona is coders and tinkerers, it can be used for so much more.

Lenny Rachitsky calls it “the most underrated AI tool for non-technical people.”

The key is to forget that it’s called Claude Code and instead think of it as Claude Local or Claude Agent. It’s essentially a super-intelligent AI running locally, able to do stuff directly on your computer—from organizing your files and folders to enhancing image quality, brainstorming domain names, summarizing customer calls, creating Linear tickets, and, as you’ll see below, so much more.

Since it’s running locally, it can handle huge files, run much longer than the cloud-based Claude/ChatGPT/Gemini chatbots, and it’s fast and versatile. Claude Code is basically Claude with even more powers.

Rachitsky shares 50 of his “favorite and most creative ways non-technical people are using Claude Code in their work and life.”

Everyone should be using Claude Code more

How to get started, and 50 ways non-technical people are using Claude Code in their work and life

lennysnewsletter.com

Noah Davis, writing in Web Designer Depot, says aloud what I’d thought but never written down: before AI, templates had already started to kill creativity in web design.

If you’re wondering why the web feels dead, lifeless, or like you’re stuck in a scrolling Groundhog Day of “hero image, tagline, three icons, CTA,” it’s not because AI hallucinated its way into the design department.

It’s because we templatified creativity into submission!

We used to design websites like we were crafting digital homes—custom woodwork, strange hallways, surprise color choices, even weird sound effects if you dared. Each one had quirks. A personality. A soul.

When I was coming up as a designer in the late 1990s and early 2000s, one of my favorite projects was designing Pixar.com. The animation studio’s soul—and by extension the soul I’d imbue into the website—was story. This manifested as a linear approach to the site, similar to a slideshow, telling the story of each of their films.

And as the web design industry grew, and everyone from Fortune 500s to the local barber shop needed and wanted a website, access to well-designed sites was made possible via templates.

Let’s be real: clients aren’t asking for design anymore. They’re asking for “a site like this.” You know the one. It looks clean. It has animations. It scrolls smoothly. It’s “modern.” Which, in 2025, is just a euphemism for “I want what everyone else has so I don’t have to think.”

Templates didn’t just streamline web development. They rewired what people expect a website to be.

Why hire a designer when you can drop your brand colors into a no-code template, plug in some Lottie files, and call it a day? The end result isn’t bad. It’s worse than bad. It’s forgettable.

Davis ends his rant with a call to action: “If you want design to live, stop feeding the template machine. Build weird stuff. Ugly stuff. Confusing stuff. Human stuff.”

AI Didn’t Kill Web Design—Templates Did It First

The web isn’t dying because of AI—it’s drowning in a sea of templates. Platforms like Squarespace, Wix, and Shopify have made building a site easier than ever—but at the cost of creativity, originality, and soul. If every website looks the same, does design even matter anymore?

webdesignerdepot.com

Auto-Tagging the Post Archive

Since I finished migrating my site from Next.js/Payload CMS to Astro, I’ve been wanting to redo the tag taxonomy for my posts. They’d gotten out of hand over time, and the tag tumbleweed grew to more than 80 tags. What the hell was I thinking when I had both “product design” and “product designer”?

Anyway, I tried a few programmatic ways to determine the best taxonomy, but ultimately culled it down to 29 tags manually. Then I really didn’t want to have to go back and re-tag more than 350 posts by hand, so I turned to AI. It took two attempts. The first approach, which Cursor planned for me, used ML to discern the tags, but it failed spectacularly because it relied on word frequency, not semantic meaning.

So I ultimately tried an LLM approach, and that worked. I spec’d it out and had Claude Code write it for me. Then, after another hour or so of experimenting to see if the resulting tags worked, I let it run concurrently in four terminal windows to process all the posts from the past 20 years. Et voilà!
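For the curious, the idea is easy to sketch: hand the model the fixed taxonomy and the post, and ask for tags from that list only. A rough outline using the Anthropic Python SDK, with a made-up posts path and a truncated tag list; this is not the actual script Claude Code wrote for me:

```python
# A rough sketch of the LLM retagging idea: classify each post against
# a fixed taxonomy by semantic meaning, not word frequency.
import json
import pathlib
import anthropic

TAXONOMY = ["tools", "product design", "ai", "typography"]  # ...the 29 culled tags

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set


def tag_post(markdown: str) -> list[str]:
    msg = client.messages.create(
        model="claude-sonnet-4-5",
        max_tokens=200,
        system=(
            "You assign tags to blog posts based on semantic meaning, not word "
            f"frequency. Choose 1-4 tags, only from this list: {TAXONOMY}. "
            "Reply with a JSON array of tags and nothing else."
        ),
        messages=[{"role": "user", "content": markdown}],
    )
    return json.loads(msg.content[0].text)


for path in pathlib.Path("src/content/posts").glob("*.md"):  # hypothetical path
    tags = tag_post(path.read_text())
    print(path.name, tags)  # the real run writes tags back into frontmatter
```

Since each post is independent, the work parallelizes trivially, which is why running it in four terminal windows at once worked fine.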

I spot-checked at least half of all the posts manually and made some adjustments. But I’m pretty happy with the results.

See the new tags on the Search page or just click around and explore.

A computer circuit board traveling at warp speed through space with motion-blurred light streaks radiating outward, symbolizing high-performance computing and speed.

The Need for Speed: Why I Rebuilt My Blog with Astro

Two weekends ago, I quietly relaunched my blog. It was a heart transplant really, of the same design I’d launched in late March.

The First Iteration

Back in early November of last year, I re-platformed from WordPress to a home-grown, Cursor-made static site generator. I’d write in Markdown and push code to my GitHub repository and the post was published via Vercel’s continuous deployment feature. The design was simple and it was a great learning project for me.

Conceptual 3D illustration of stacked digital notebooks with a pen on top, overlaid on colorful computer code patterns.

Why We Still Need a HyperCard for the AI Era

I rewatched the 1982 film TRON for the umpteenth time the other night with my wife. I have always credited this movie as the spark that got me interested in computers. Mind you, I was nine years old when this film came out. I was so excited after watching the movie that I got my father to buy us a home computer—the mighty Atari 400 (note sarcasm). I remember an educational game that came on cassette called “States & Capitals” that taught me, well, the states and their capitals. It also introduced me to BASIC, and after watching TRON, I wanted to write programs!