
56 posts tagged with “tools”

This is a fascinating watch. Ryo Lu, Head of Design at Cursor, builds a retro Mac calculator using Cursor agents while being interviewed. Lu’s personal website is an homage to Mac OS X, complete with Aqua-style UI elements. He runs multiple local background agents without them stepping on each other, fixes bugs live, and themes the UI to match system styles so it feels designed—not “purple AI slop,” as he calls it.

Lu, as interviewed by Peter Yang, on how engineers and designers work together at Cursor (lightly edited for clarity):

So at Cursor, the roles between designers, PM, and engineers are really muddy. We kind of do the part [that is] our unique strength. We use the agent to tie everything. And when we need help, we can assemble people together to work on the thing.

Maybe some of [us] focus more on the visuals or interactions. Some focus more on the infrastructure side of things, where you design really robust architecture to scale the thing. So yeah, there is a lot less separation between roles and teams or even tools that we use. So for doing designs…we will maybe just prototype in Cursor, because that lets us really interact with the live states of the app. It just feels a lot more real than some pictures in Figma.

And surprisingly, they don’t have official product managers at Cursor. Yang asks, “Did you actually hire a PM? Because last time I talked to Lee [Robinson] there was like no PMs.”

Lu again, and edited lightly for clarity:

So we did not hire a PM yet, but we do have an engineer who used to be a founder. He took a lot more of the PM-y side of the job, and then became the first PM of the company. But I would still say a lot of the PM jobs are kind of spread across the builders in the team.

That mostly makes sense because it’s engineers building tools for engineers. You are your audience, which is rare.

Full Tutorial: Design to Code in 45 Min with Cursor's Head of Design | Ryo Lu

Design-to-code tutorial: Watch Cursor's Head of Design Ryo Lu build a retro Mac calculator with agents - a 45-minute, hands-on walkthrough to prototype and ship

youtube.com

It’s always interesting for me to read how other designers use AI to vibe code their projects. I think using Figma Make to conjure a prototype is one thing, but vibe coding something in production is entirely different. Personally, I’ve been through it a couple of times, which I’ve already detailed here and here.

Anton Sten recently wrote about his process. Like me, he starts in Figma:

This might be the most important part: I don’t start by talking to AI. I start in Figma.

I know Figma. I can move fast there. So I sketch out the scaffolding first—general theme, grids, typography, color. Maybe one or two pages. Nothing polished, just enough to know what I’m building.

Why does this matter? Because AI will happily design the wrong thing for you. If you open Claude Code with a vague prompt and no direction, you’ll get something—but it probably won’t be what you needed. AI is a builder, not an architect. You still have to be the architect.

I appreciate Sten’s conclusion not to let the AI do all of it for you, echoing Dr. Maya Ackerman’s sentiment of humble creative machines:

But—and this is important—you still need design thinking and systems thinking. AI handles the syntax, but you need to know what you’re building, why you’re building it, and how the pieces fit together. The hard part was never the code. The hard part is the decisions.

Vibe coding for designers: my actual process | Anton Sten

An honest breakdown of how I built and maintain antonsten.com using AI—what actually works, where I’ve hit walls, and why designers should embrace this approach.

antonsten.com

This episode of Design of AI with Dr. Maya Ackerman is wonderful. She echoed a lot of what I’ve been thinking about recently—how AI can augment what we as designers and creatives can do. There’s a ton of content out there that hypes up AI that can replace jobs—“Type this prompt and instantly get a marketing plan!” or “Type this prompt and get an entire website!”

Ackerman, as interviewed by Arpy Dragffy-Guerrero:

I have a model I developed called humble creative machines, which is the idea that we are inherently much smarter than the AI. We have not reached even 10% of our capacity as creative human beings. And the role of AI in this ecosystem is not to become better than us but to help elevate us. That applies to people who design AI, of course, because a lot of the ways that AI is designed these days, you can tell you’re cut out of the loop. But on the other hand, some of the most creative people, those who are using AI in the most beneficial way, take this attitude themselves. They fight to stay in charge. They find ways to have the AI serve their purposes instead of treating it like an all-knowing oracle. So really, it’s sort of the audacity, the guts to believe that you are smarter than this so-called oracle, right? It’s this confidence to lead, to demand that things go your way when you’re using AI.

Her stance is that those who use AI best are those who wield it and shape its output to match their sensibilities. And so, as we’ve been hearing ad nauseam, our taste and judgment as designers really matter right now.

I’ve been playing a lot with ComfyUI recently—I’m working on a personal project that I’ll share if/when I finish it. But it made me realize that prompting a visual to get it to match what I have in my mind’s eye is not easy. This recent Instagram reel from famed designer Jessica Walsh captures my thoughts well:

I would say most AI output is shitty. People just assumed, “Oh, you rendered that with AI. That must have been super easy.” But what they don’t realize is that it took an entire day of some of our most creative people working and pushing the different prompts and trying different tools out and experimenting and refining. And you need a good eye to understand how to curate and pick what the best outputs are. Without that right now, AI is still pretty worthless.

It takes a ton of time to get AI output to look great, beyond prompting: inpainting, control nets, and even Photoshopping. What most non-professionals do is they take the first output from an LLM or image generator and present it as great. But it’s really not.

So I like what Dr. Ackerman mentioned in her episode: we should be in control of the humble machines, not the other way around.

Headshot of a blonde woman in a patterned blazer with overlay text “Future of Human - AI Creativity” and “Design of AI”

The Future of Human-AI Creativity [Dr. Maya Ackerman]

AI is threatening creativity, but that's because we're giving too much control to the machine to think on our behalf. In this episode, Dr. Maya Ackerman…

designof.ai

When Figma acquired Weavy last month, I wrote a little bit about node-based UIs and ComfyUI. Looks like Adobe has been exploring this user interface paradigm as well.

Daniel John writes in Creative Bloq:

Project Graph is capable of turning complex workflows into user-friendly UIs (or ‘capsules’), and can access tools from across the Creative Cloud suite, including Photoshop, Illustrator and Premiere Pro – making it a potentially game-changing tool for creative pros.

But it isn’t just Adobe’s own tools that Project Graph is able to tap into. It also has access to the multitude of third party AI models Adobe recently announced partnerships with, including those made by Google, OpenAI and many more.

These tools can be used to build a node-based workflow, which can then be packaged into a streamlined tool with a deceptively simple interface.

And from Adobe’s blog post about Project Graph:

Project Graph is a new creative system that gives artists and designers real control and customization over their workflows at scale. It blends the best AI models with the capabilities of Adobe’s creative tools, such as Photoshop, inside a visual, node-based editor so you can design, explore, and refine ideas in a way that feels tactile and expressive, while still supporting the precision and reliability creative pros expect.

I’ve been playing around with ComfyUI a lot recently (more about this in a future post), so I’m very excited to see how this kind of UI can fit into Adobe’s products.

Stylized dark grid with blue-purple modular devices linked by cables, central "Ps" Photoshop

Adobe just made its most important announcement in years

Here’s why Project Graph matters for creatives.

creativebloq.com

SaaS company Urlbox created a fun project called One Million Screenshots, with, yup, over a million screenshots of the top one million websites. You navigate the page like Google Maps, zooming in and panning around.

Why? From the FAQ page:

We wanted to celebrate Urlbox taking over 100 million screenshots for customers in 2023… so we thought it would be fun to take an extra 1,048,576 screenshots every month… did we mention we’re really into screenshots.

(h/t Brad Frost)

One Million Screenshots

Explore the web’s biggest homepage. Discover similar sites. See changes over time. Get web data.

onemillionscreenshots.com

Ryan Feigenbaum created a fun Teenage Engineering-inspired color palette generator he calls “Color Palette Pro.” Back in 2023, he was experimenting with programmatic palette generation. But he didn’t like his work, calling the resulting palettes “gross, with luminosity all over the place, clashing colors, and garish combinations.”

So Feigenbaum went back to the drawing board:

That set off a deep dive into color theory, reading various articles and books like Josef Albers’ Interaction of Color (1963), understanding color space better, all of which coincided with an explosion of new color methods and technical support on the web.

These frustrations and browser improvements culminated in a realization and an app.
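The fix, in essence, is to generate palettes in a perceptual color space, where lightness and chroma can be held steady while hue varies, which is exactly what keeps luminosity from going “all over the place.” Here’s a minimal sketch of that idea in Python, emitting CSS oklch() strings; the constants are arbitrary, and this is my illustration, not Feigenbaum’s code.

```python
# Toy palette generator in a perceptual color space (OKLCH): an
# illustration of the idea, not Color Palette Pro's implementation.
# Holding lightness and chroma constant while stepping hue keeps the
# swatches perceptually even.

def palette(n: int = 5, lightness: float = 0.7, chroma: float = 0.15) -> list[str]:
    """Return n CSS oklch() colors evenly spaced around the hue wheel."""
    return [f"oklch({lightness} {chroma} {round(i * 360 / n)})" for i in range(n)]

print(palette())
# ['oklch(0.7 0.15 0)', 'oklch(0.7 0.15 72)', 'oklch(0.7 0.15 144)',
#  'oklch(0.7 0.15 216)', 'oklch(0.7 0.15 288)']
```

Every swatch shares the same perceived lightness, so nothing in the set clashes on luminosity alone; naive RGB or HSL stepping makes no such guarantee.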

Here he is, demoing his app:

COLORPALETTE PRO UI showing Vibrant Violet: color wheel, purple-to-orange swatch grid, and lightness/chroma/hue sliders.

Color Palette Pro — A Synthesizer for Color Palettes

Generate customizable color palettes in advanced color spaces that can be easily shared, downloaded, or exported.

colorpalette.pro

He told me his CEO - who’s never written a line of code - was running their company from an AI code editor.

I almost fell out of my chair.

OF COURSE. WHY HAD I NOT THOUGHT OF THAT.

I’ve since gotten rid of almost all of my productivity tools.

ChatGPT, Notion, Todoist, Airtable, Google Keep, Perplexity, my CRM. All gone.

That’s the lede for a piece by Derek Larson on running everything from Claude Code. I’ve covered how Claude Code is pretty brilliant, with dozens of use cases beyond coding.

But getting rid of everything and using just text files and the terminal window? Seems extreme.

Larson uses a skill in Claude Code called “/weekly” to do a weekly review.

  1. Claude looks at every file change since last week
  2. Claude evaluates the state of projects, tasks, and the roadmap
  3. We have a conversation to dig deeper, and make decisions
  4. Claude generates a document summarizing the week and plan we agreed on

Then Claude finds items he’s missed or is procrastinating on, and “creates a space to dump everything” on his mind.
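For context, custom slash commands in Claude Code are just Markdown prompt files dropped into a .claude/commands/ folder. Larson doesn’t share his file, but a minimal “/weekly” could look something like this sketch (my guess at the shape, not his actual skill):

```markdown
<!-- .claude/commands/weekly.md: a hypothetical sketch, not Larson's file -->
Run my weekly review:

1. List every file in this directory that changed since last week.
2. Evaluate the state of my projects, tasks, and roadmap files.
3. Ask me follow-up questions so we can dig deeper and make decisions together.
4. Write a document summarizing the week and the plan we agreed on.
5. Flag anything I've missed or seem to be procrastinating on, and give me a
   scratch file to dump everything on my mind.
```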

Blue furry Cookie Monster holding two baking sheets filled with chocolate chip cookies.

Feed the Beast

AI Eats Software

dtlarson.com

Chris Butler wrestles with a generations-old problem in his latest piece: new technologies shortcut the old ways of doing things and therefore quality takes a nosedive. But is it different this time with the tools available to us today?

While design is more accessible than ever, with Adobe experimenting with chat interfaces and Canva offering pro-level design apps for free, putting a tool into someone’s hands doesn’t mean they’ll know how to wield it.

Anyone can now create something that looks professional, that uses modern layouts and typography, that feels designed. But producing something that feels designed does not mean that any design has happened. Most tools don’t ask you what you want someone to do. They don’t force you to make hard choices about hierarchy and priority. They offer you options, and if you don’t already understand the fundamentals of how design guides attention and serves purpose, you’ll end up using too many of them to no end.

Butler concludes that as designers, we’re in a bind because “the pace of change is only accelerating, and it is a serious challenge to designers to determine how much time to spend keeping up.”

You can’t build foundational knowledge while chasing the new. But you can’t ignore the new entirely, or you’ll fall behind. So you split your time, and both efforts can suffer. The fundamentals remain elusive because you’re too busy keeping up. The tools remain half-learned because you’re too busy teaching [design fundamentals to clients].

Neither Butler nor I know if there’s a good solution to this problem. Like I said at the start, this is an age-old problem. Friction is a feature, not a bug.

This is just the reality of working in a field that sits at the intersection of human behavior and technological change. Both move, but at different speeds. Human attention, cognition, emotion — these things change slowly, if at all. Technology changes constantly. Design has to navigate both.

And while Butler’s essay never explicitly mentions AI or AI tools, it’s strongly implied. Developers using AI tools to code miss out on the fundamentals of building software. Designers (or their clients) using AI to design face the issues brought up here. Using AI to accelerate what you already know: that seems to be The Way.

The Fundamentals Problem

A few months ago, a client was reviewing a landing page design with my team. They had created it themselves using a page builder tool — one of those

chrbutler.com

I’ve been a big fan of node-based UIs since I first experimented with Shake in the early 2000s. It’s kind of weird to wrap your head around, especially if you’re used to layers in Photoshop or Figma. The easiest way to think about nodes is to rotate the layer stack 90 degrees. Each node takes inputs on the left, performs one distinct operation on them, and emits outputs on the right. You connect multiple nodes together to process assets into your final composition. Popular apps with node-based workflows today include Unreal Engine (Blueprints), DaVinci Resolve (Fusion and Color), and n8n.
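In code terms, a node graph is just a directed graph of operations that you evaluate from the inputs forward. A toy sketch in Python, as a mental model only (no real app’s internals are assumed):

```python
# A node wraps one operation; edges feed one node's outputs into
# another's inputs. Evaluating the final node pulls values through
# the whole graph: the code equivalent of a layer stack rotated
# 90 degrees.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Node:
    op: Callable                 # the one distinct thing this node does
    inputs: list["Node"] = field(default_factory=list)

    def evaluate(self):
        # Resolve upstream nodes first, then apply this node's operation.
        return self.op(*(n.evaluate() for n in self.inputs))

# A toy composite: load two assets, blur one, then blend them.
load_a = Node(lambda: "photo")
load_b = Node(lambda: "texture")
blur = Node(lambda img: f"blur({img})", [load_a])
blend = Node(lambda a, b: f"blend({a}, {b})", [blur, load_b])

print(blend.evaluate())  # blend(blur(photo), texture)
```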

ComfyUI is another open-source tool that uses the same node-graph architecture. Created in 2023 to put a UI on the visual generative AI models like Stable Diffusion appearing around that time, it’s become popular among artists for wielding the plethora of image and video gen AI models.

Fast-forward to last week, when Figma announced they had acquired Weavy, a much friendlier and cloud-based version of ComfyUI.

Weavy brings the world’s leading AI models together with professional editing tools on a single, browser-based canvas. With Weavy, you can choose the model you want for a task (e.g. Seedance, Sora, and Veo for cinematic video; Flux and Ideogram for realism; and Nano-Banana or Seedream for precision) and compose powerful primitives using generative AI outputs and hands-on edits (e.g. adjusting lighting, masking an object, color grading a shot). The end result is an inspiring environment for creative exploration and a flexible media pipeline where every output feeds the next.

This node-based approach brings a new level of craft and control to AI generation. Outputs can be branched, remixed, and refined, combining creative exploration with precision and craft. The Weavy team has inspired us with the balance they’ve struck between simplicity, approachability, and power. They’ve also created a tool that’s just a joy to use.

I must admit I had not heard about Weavy before the announcement. I had high hopes for Visual Electric, but it never quite lived up to its ambitions. I proceeded to watch all the official Weavy tutorial videos on YouTube, and I love it. It seems so much easier to use than ComfyUI. Let’s see what Figma does with the product.

Node-based image editor with connected panels showing a man in a rowboat on water then composited floating over a deep canyon.

Introducing Figma Weave: the next generation of AI-native creation at Figma

Figma has acquired Weavy, a platform that brings generative AI and professional editing tools into the open canvas.

figma.com

In graphic design news, a new version of the Affinity suite dropped last week. Canva purchased Serif, the company behind the Affinity products, last year; after about a year of engineering, they’ve combined all the products into a single app for maximum flexibility. And they made it free.

Of course, that sparks debate.

Joe Foley, writing for Creative Bloq, explains:

…A natural suspicion of big corporations is causing some to worry about what the new Affinity will become. What’s in it for Canva?

Theories abound. Some think the app will start to show adverts like many free mobile apps do. Others think it will be used to train AI (something Canva denies). Some wonder if Canva’s just doing it to spite Adobe. “Their objective was to undermine Adobe, not provide for paying customers. Revenge instead of progress,” one person thinks.

Others fear Affinity’s tools will be left to stagnate. “If you depend on a software for your design work it needs to be regularly updated and developed. Free software never has that pressure and priority to be kept top notch,” one person writes.

AI features are gated behind Canva’s paid premium subscription plans, which makes sense, since AI features carry inference costs. And with Adobe going all out on its AI features, gen AI is now table stakes for creative and design programs.

Photo editor showing a man in a green jacket with gold chains against a purple gradient background, layers panel visible.

Is Affinity’s free Photoshop rival too good to be true?

Designers are torn over the new app.

creativebloq.com

I’ve been on the receiving end of Layer 1226 before, and it’s not fun. While I’m pretty good with my layer-naming hygiene, I’m not perfect. So I welcome anything that can help rename my layers. Apparently, when Adobe showed off this new AI feature at its Adobe MAX user conference last week, it drew a big round of applause. (Figma’s had this feature since June 2024.)

There’s more than just renaming layers though. Adobe is leaning into conversational UI for editing too. For new users coming to editing tools, this makes a lot of sense because the learning curve for Photoshop is very steep. But as I’ve always said, professionals will also need fine-grained controls.

Writing for CNET, Katelyn Chedraoui:

Renaming layers is just one of many things Adobe’s new AI assistants will be able to do. These chatbot-like tools will be added to Photoshop and Express. They have an emphasis on “conversational, agentic” experiences — meaning you can ask the chatbot to make edits, and it can independently handle them.

Express’s AI assistant is similar to using a chatbot. Once you toggle on the tool in the upper left corner, a conversation window pops up. You can ask the AI to change the color of an object or remove an obtrusive element. While pro users might be comfortable making those edits manually, the AI assistant might be more appealing to its less experienced users and folks working under a time crunch.

A peek into Adobe’s future reveals more agentic experiences:

Also announced on Tuesday is Project Moonlight, a new platform in beta on Adobe’s AI hub, Firefly. It’s a new tool that hopes to act as a creative partner. With your permission, it uses your data from Adobe platforms and social media accounts to help you create content. For example, you can ask it to come up with 20 ideas for what to do with your newest Lightroom photos based on your most successful Instagram posts in the past. 

These AI efforts represent a range of what conversational editing can look like, said Mike Polner, Adobe Firefly’s vice president of product marketing for creators, in an interview.

“One end of the spectrum is [to] type in a prompt and say, ‘Make my hat blue.’ That’s very simplistic,” said Polner. “With Project Moonlight, it can understand your context, explore and help you come up with new ideas and then help you analyze the content that you already have.”

Photoshop AI Assistant UI over stone church landscape with large 'haven' text and command bubbles like 'Increase saturation'.

Photoshop’s New AI Assistant Can Rename All Your Layers So You Don’t Have To

The chatbot-like AI assistant isn’t out yet, but there is at least one practical way to use it.

cnet.com

It’s interesting to me that Figma had to have a separate conference and set of announcements focused on design systems. In some sense it’s an indicator of how big and mature this part of design has become.

A few highlights from my point-of-view…

Slots seems to solve one of those small UX paper cuts—those niggly inconveniences that we just lived with. But this is a big deal. You’ll be able to add layers within component instances without breaking the connection to your design system. No more pre-building hidden list items or forcing designers to detach components. Pretty advanced stuff.

On the code front, they’re making Code Connect actually approachable with a new UI that connects directly to GitHub and uses AI to map components. The Figma MCP server is out of beta and now supports design system guidelines—meaning your agentic coding tools can actually respect your design standards. Can’t wait to try these.

For teams like mine that are using Make, you’ll be able to pull in design systems through two routes: Make kits (generate React and CSS from Figma libraries) or npm package imports (bring in your existing code components). This is the part where AI-assisted design doesn’t have to mean throwing pixelcraft out the window.

Design systems have always been about maintaining quality at scale. These updates are very welcome.

Bright cobalt background with "schema" in a maroon bar and light-blue "by Figma" text, stepped columns of orange semicircles on pale-cyan blocks along right and bottom.

Schema 2025: Design Systems For A New Era

As AI accelerates product development, design systems keep the bar for craft and quality high. Here’s everything we announced at Schema to help teams design for the AI era.

figma.com

With Cursor and Lovable as the darlings of AI coding tools, don’t sleep on Claude Code. Personally, I’ve been splitting my time between Claude Code and Cursor. While Claude Code’s primary persona is coders and tinkerers, it can be used for so much more.

Lenny Rachitsky calls it “the most underrated AI tool for non-technical people.”

The key is to forget that it’s called Claude Code and instead think of it as Claude Local or Claude Agent. It’s essentially a super-intelligent AI running locally, able to do stuff directly on your computer—from organizing your files and folders to enhancing image quality, brainstorming domain names, summarizing customer calls, creating Linear tickets, and, as you’ll see below, so much more.

Since it’s running locally, it can handle huge files, run much longer than the cloud-based Claude/ChatGPT/Gemini chatbots, and it’s fast and versatile. Claude Code is basically Claude with even more powers.

Rachitsky shares 50 of his “favorite and most creative ways non-technical people are using Claude Code in their work and life.”

Everyone should be using Claude Code more

How to get started, and 50 ways non-technical people are using Claude Code in their work and life

lennysnewsletter.com

Noah Davis, writing in Web Designer Depot, says aloud what I’d thought but never written down: before AI, templates had already started killing creativity in web design.

If you’re wondering why the web feels dead, lifeless, or like you’re stuck in a scrolling Groundhog Day of “hero image, tagline, three icons, CTA,” it’s not because AI hallucinated its way into the design department.

It’s because we templatified creativity into submission!

We used to design websites like we were crafting digital homes—custom woodwork, strange hallways, surprise color choices, even weird sound effects if you dared. Each one had quirks. A personality. A soul.

When I was coming up as a designer in the late 1990s and early 2000s, one of my favorite projects was designing Pixar.com. The animation studio’s soul—and by extension the soul I’d imbue into the website—was story. This manifested as a linear approach to the site, similar to a slideshow, to tell the story of each of their films.

And as the web design industry grew, everyone from Fortune 500s to the local barber shop needed and wanted a website, and access to well-designed sites came via templates.

Let’s be real: clients aren’t asking for design anymore. They’re asking for “a site like this.” You know the one. It looks clean. It has animations. It scrolls smoothly. It’s “modern.” Which, in 2025, is just a euphemism for “I want what everyone else has so I don’t have to think.”

Templates didn’t just streamline web development. They rewired what people expect a website to be.

Why hire a designer when you can drop your brand colors into a no-code template, plug in some Lottie files, and call it a day? The end result isn’t bad. It’s worse than bad. It’s forgettable.

Davis ends his rant with a call to action: “If you want design to live, stop feeding the template machine. Build weird stuff. Ugly stuff. Confusing stuff. Human stuff.”

AI Didn’t Kill Web Design—Templates Did It First

The web isn’t dying because of AI—it’s drowning in a sea of templates. Platforms like Squarespace, Wix, and Shopify have made building a site easier than ever—but at the cost of creativity, originality, and soul. If every website looks the same, does design even matter anymore?

webdesignerdepot.com

Auto-Tagging the Post Archive

Since I finished migrating my site from Next.js/Payload CMS to Astro, I’ve been wanting to redo the tag taxonomy for my posts. They’d gotten out of hand over time, and the tag tumbleweed grew to more than 80 tags. What the hell was I thinking when I had both “product design” and “product designer”?

Anyway, I tried a few programmatic ways to determine the best taxonomy, but ultimately culled it down to 29 tags by hand. Then I really didn’t want to go back and re-tag more than 350 posts manually, so I turned to AI. It took two attempts. The first one, which Cursor planned for me, used classic ML to discern the tags, but it failed spectacularly because it keyed on word frequency, not semantic meaning.

So I ultimately tried an LLM approach, and that worked. I spec’d it out and had Claude Code write it for me. Then, after another hour or so of experimenting to see whether the resulting tags worked, I let it run concurrently in four terminal windows to process all the posts from the past 20 years. Et voilà!
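The shape of the script is simple: send each post’s text to an LLM along with the allowed tag list, and parse the reply. Here’s a minimal Python sketch of that idea, not the actual script Claude Code wrote for me; the model name, tag list, and content path are placeholders.

```python
# Minimal sketch of the LLM tagging pass. The real script also wrote
# the chosen tags back into each post's frontmatter.

from pathlib import Path
import anthropic

TAGS = ["tools", "ai", "product design", "typography"]  # the culled taxonomy

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def tag_post(markdown: str) -> list[str]:
    """Ask the model to pick tags by semantic meaning, not word frequency."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=100,
        messages=[{
            "role": "user",
            "content": (
                "Choose up to 3 tags for this post from this list: "
                f"{', '.join(TAGS)}. Reply with a comma-separated list only."
                f"\n\n{markdown}"
            ),
        }],
    )
    picked = response.content[0].text
    return [t.strip() for t in picked.split(",") if t.strip() in TAGS]

for post in Path("src/content/posts").glob("*.md"):  # hypothetical path
    print(post.name, tag_post(post.read_text()))
```

Running four copies of this over different slices of the post list is all the “four terminal windows” trick amounts to.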

I spot-checked at least half of all the posts manually and made some adjustments. But I’m pretty happy with the results.

See the new tags on the Search page or just click around and explore.

A computer circuit board traveling at warp speed through space with motion-blurred light streaks radiating outward, symbolizing high-performance computing and speed.

The Need for Speed: Why I Rebuilt My Blog with Astro

Two weekends ago, I quietly relaunched my blog. It was a heart transplant, really, of the same design I’d launched in late March.

The First Iteration

Back in early November of last year, I re-platformed from WordPress to a home-grown, Cursor-made static site generator. I’d write in Markdown and push code to my GitHub repository, and the post was published via Vercel’s continuous deployment feature. The design was simple, and it was a great learning project for me.

Conceptual 3D illustration of stacked digital notebooks with a pen on top, overlaid on colorful computer code patterns.

Why We Still Need a HyperCard for the AI Era

I rewatched the 1982 film TRON for the umpteenth time the other night with my wife. I have always credited this movie as the spark that got me interested in computers. Mind you, I was nine years old when this film came out. I was so excited after watching the movie that I got my father to buy us a home computer—the mighty Atari 400 (note sarcasm). I remember an educational game that came on cassette called “States & Capitals” that taught me, well, the states and their capitals. It also introduced me to BASIC, and after watching TRON, I wanted to write programs!

In a fascinating thread about designing a typeface in Illustrator versus a font editor, renowned typographer Jonathan Hoefler lets us peek behind the curtain.

But moreover, the reason not to design typefaces in a drawing program is that there, you’re drawing letters in isolation, without regard to their neighbors. Here’s the lowercase G from first corner of the HTF Didot family, its 96pt Light Roman master, which I drew toward the end of 1991. (Be gentle; I was 21.) I remember being delighted by the results, no doubt focussing on that delicate ear, etc. But really, this is only half the picture, because it’s impossible to know if this letter works, unless you give it context. Here it is between lowercase Ns, which establish a typographic ‘control’ for an alphabet’s weight, width, proportions, contrast, fit, and rhythm. Is this still a good G? Should the upper bowl maybe move left a little? How do we feel about its weight, compared to its neighbors? Is the ear too dainty?

Jonathan Hoefler on designing fonts in a drawing program versus a font editor

Threads

threads.com

Figma is adding new keyboard shortcuts to improve navigation and selection for power users and keyboard-only users. It’s a win-win that improves accessibility and efficiency. Sarah Kelley, a product marketer at Figma, writes:

For millions, navigating digital tools with a keyboard isn’t just about preference for speed and ergonomics—it’s a fundamental need. …

We’re introducing a series of new features that remove barriers for keyboard-only designers across most Figma products. Users can now pan the canvas, insert objects, and make precise selections quickly and easily. And, with improved screen reader support, these actions are read aloud as users work, making it easier to stay oriented.

Nice work!


Who Says Design Needs a Mouse?

Figma's new accessibility features bring better keyboard and screen reader support to all creators.

figma.com

My former colleague from Organic, Christian Haas—now ECD at YouTube—has been experimenting with AI video generation recently. He’s made a trilogy of short films called AI Jobs.


You can watch part one above 👆, but don’t sleep on parts two and three.

Haas put together a “behind the scenes” article explaining his process. It’s fascinating, and I want to play with video generation myself at some point.

I started with a rough script, but that was just the beginning of a conversation. As I started generating images, I was casting my characters and scouting locations in real time. What the model produced would inspire new ideas, and I would rewrite the script on the fly. This iterative loop continued through every stage. Decisions weren’t locked in; they were fluid. A discovery made during the edit could send me right back to “production” to scout a new location, cast a new character and generate a new shot. This flexibility is one of the most powerful aspects of creating with Gen AI.

It’s a wonderful observation Haas has made—the workflow enabled by gen AI allows for more creative freedom. In any creative endeavor where producing the final thing takes a significant amount of labor and materials, be it a film, commercial photography, or software, planning is a huge part of the work. We work hard to spec out everything before a crew of a hundred shows up on set or a team of highly paid engineers starts coding. With gen AI, as shown here with Google’s Veo 3, you have more room for exploration and expression.

UPDATE: I came across this post from Rory Flynn after I published this. He uses diagrams to direct Veo 3.


Behind the Prompts — The Making of "AI Jobs"

Christian Haas created the first film with the simple goal of learning to use the tools. He didn’t know if it would yield anything worth watching but that was not the point.

linkedin.com

For the past year, CPG behemoth Unilever has been “working with marketing services group Brandtech to build up its Beauty AI Studio: a bespoke, in-house system inside its beauty and wellbeing business. Now in place across 18 different markets (the U.S. and U.K. among them), the studio is being used to make assets for paid social, programmatic display inventory and e-commerce usage across brands including Dove Intensive Repair, TRESemme Lamellar Shine and Vaseline Gluta Hya.”

Sam Bradley, writing in Digiday:

The system relies on Pencil Pro, a generative AI application developed by Brandtech Group. The tool draws on several large language models (LLMs), as well as API access to Meta and TikTok for effectiveness measurement. It’s already used by hearing-care brand Amplifon to rapidly produce text and image assets for digital ad channels.

In Unilever’s process, marketers use prompts and their own insights about target audiences to generate images and video based on 3D renders of each product, a practice sometimes referred to as “digital twinning.” Each brand in a given market is assigned a “BrandDNAi” — an AI tool that can retrieve information about brand guidelines and relevant regulations and that provides further limitations to the generative process.

So far, they haven’t used this system to generate AI humans. Yet.

Inside Unilever’s AI beauty marketing assembly line — and its implications for agencies

The CPG giant has created an AI-augmented in-house production system. Could it be a template for others looking to bring AI in house?

digiday.com

Kendra Albert, in her blog post about Heavyweight, a new tool she built to create “extremely law-firm-looking” letters:

Sometimes, you don’t need a lawyer, you just need to look like you have one.

That’s the idea behind Heavyweight, a project that democratizes the aesthetics of (in lieu of access to) legal representation. Heavyweight is a free, online, and open-source tool that lets you give any complaint you have extremely law-firm-looking formatting and letterhead. Importantly, it does so without ever using any language that would actually claim that the letter was written by a lawyer.


Heavyweight: Letters Taken Seriously - Free & Open Legal Letterhead Generator

Generate professional-looking demand letters with style and snootiness

heavyweight.cc

This is a really well-written piece that pulls the AI + design concepts neatly together. Sharang Sharma, writing in UX Collective:

As AI reshapes how we work, I’ve been asking myself, it’s not just how to stay relevant, but how to keep growing and finding joy in my craft.

In my learning, the new shift requires leveraging three areas:

  1. AI tools: Assembling an evolving AI design stack to ship fast
  2. AI fluency: Learning how to design for probabilistic systems
  3. Human-advantage: Strengthening moats like craft, agency and judgment to stay ahead of automation

Together with strategic thinking and human-centric skills, these pillars shape our path toward becoming an AI-native designer.

Sharma connects all the crumbs I’ve been dropping this week:


AI tools + AI fluency + human advantage = AI-native designer

From tools to agency, is this what it would take to thrive as a product designer in the AI era?

uxdesign.cc

In case you missed it, there’s been a major shift in the AI tool landscape.

On Friday, OpenAI’s $3 billion offer to acquire AI coding tool Windsurf expired. Windsurf is the Pepsi to Cursor’s Coke. They’re both IDEs, the desktop applications that software developers use to write code. Think of them as supercharged text editors with AI built in.

On Friday evening, Google announced that it had hired Windsurf’s CEO Varun Mohan, co-founder Douglas Chen, and several key researchers for $2.4 billion.

On Monday, Cognition, the company behind Devin, the self-described “AI engineer,” announced that it had acquired Windsurf for an undisclosed sum, noting that its remaining 250 employees will “participate financially in this deal.”

Why does this matter to designers?

The AI tools market is changing very rapidly. With AI helping to write these applications, their numbers and features are always increasing—or in this case, maybe consolidating. Choose wisely before investing too deeply into one particular tool. The one piece of advice I would give here is to avoid lock-in. Don’t get tied to a vendor. Ensure that your tool of choice can export your work—the code.

Jason Lemkin has more on the business side of things and how it affects VC-backed startups.


Did Windsurf Sell Too Cheap? The Wild 72-Hour Saga and AI Coding Valuations

The last 72 hours in AI coding have been nothing short of extraordinary. What started as a potential $3 billion OpenAI acquisition of Windsurf ended with Google poaching Windsurf’s CEO and co…

saastr.com

Geoffrey Litt, Josh Horowitz, Peter van Hardenberg, and Todd Matthews, writing a paper for the research lab Ink & Switch, offer a great, well-thought-out piece on what they call “malleable software.”

We envision a new kind of computing ecosystem that gives users agency as co-creators. … a software ecosystem where anyone can adapt their tools to their needs with minimal friction. … When we say ‘adapting tools’ we include a whole range of customizations, from making small tweaks to existing software, to deep renovations, to creating new tools that work well in coordination with existing ones. Adaptation doesn’t imply starting over from scratch.

In their paper, they use analogies like kitchen tools and tool arrangement in a workshop to explore the idea. With regard to the current crop of AI prompt-to-code tools:

We think these developments hold exciting potential, and represent a good reason to pursue malleable software at this moment. But at the same time, AI code generation alone does not address all the barriers to malleability. Even if we presume that every computer user could perfectly write and edit code, that still leaves open some big questions.

How can users tweak the existing tools they’ve installed, rather than just making new siloed applications? How can AI-generated tools compose with one another to build up larger workflows over shared data? And how can we let users take more direct, precise control over tweaking their software, without needing to resort to AI coding for even the tiniest change? None of these questions are addressed by products that generate a cloud-hosted application from a prompt.

Kind of a different take from the “personal software” we’ve seen written about before.


Malleable software: Restoring user agency in a world of locked-down apps

The original promise of personal computing was a new kind of clay. Instead, we got appliances: built far away, sealed, unchangeable. In this essay, we envision malleable software: tools that users can reshape with minimal friction to suit their unique needs.

inkandswitch.com