
42 posts tagged with “creativity”

Humans are the bread in the sandwich, and the AI is in the middle.

That’s Dan Shipper on his podcast AI & I, talking with Every’s Kieran Klaassen, the engineer behind the compound engineering plugin. They’re working out where humans actually belong in an AI-driven workflow. It’s the same split showing up on the design side.

Klaassen, on the polish step at the end of the work:

The other moment comes at the end. Something comes out. How do you validate it? Well, it’s already tested—browser automated testing has clicked through everything, all the requirements are clearly specified, and it says everything works. But the beauty comes in when a human looks at it, clicks around, and has a feel for it: “Oh, this doesn’t feel right. We can polish it. We can make it better. There’s something still missing. We can make the design better.” […] all the way at the end, when everything is done, you can elevate everything and make it even better. And I think we need to do that, because if we don’t, it will all be slop—all the same. It’s very important to make it feel great because the bar is high, and the bar will always get higher.

“It will all be slop” is the line every team should have taped to a monitor. A passing test suite and a green PR don’t tell you whether the thing is actually any good. That judgment still lives with a human at the end of the workflow. Klaassen is correct that the bar keeps moving up, not down, and the teams who treat the polish step as optional are the ones whose products will look interchangeable in twelve months.

Klaassen, on the art-and-ownership argument:

But I do think that in the end, if you ship something—if you make a statement in the world—and you want it to be your own, you have to say yes or no at some point. You cannot fully automate everything. It’s a bit like making art. If you want it to be yours, it needs to come from you or somehow be connected. So I believe having those moments where you decide—where you choose what you enjoy—is so important. That’s why it’s so important to do things you enjoy and love.

Whatever your version of beautiful is, that’s the bread. Everything else is filling.

Cover art for "AI & I" podcast by Every, featuring a smiling man with glasses rendered in gold tones against a purple background.

The AI Sandwich: Where Humans Excel in an AI World

‘AI & I’ with compound engineering creator Kieran Klaassen

every.to

Design orgs and publications have been issuing AI bans, calling them principled responses to job displacement, training data theft, and the degradation of craft. The impulse is understandable: AI doesn’t just replace tools; it challenges what made you worth hiring, and the prospect of losing what you’ve built is felt more sharply than any potential gain. Christopher Butler thinks those lines are drawn in the wrong place:

By drawing hard lines against entire categories of tools, we’re mistaking the means for the problem itself, and in doing so, we’re limiting our ability to shape how these technologies integrate into creative work.

Butler doesn’t dismiss the concerns driving those bans: training data problems, corporate consolidation, job displacement. He thinks they’re legitimate and urgent. His objection is to making the tool the target rather than the behavior. Drawing the line at AI, he argues, repeats the mistake designers made at the letterpress and again at paste-up. The technology changed. The question—about authorship, judgment, and what craft actually requires—stayed the same.

Butler’s conclusion:

A designer who uses AI to plagiarize another artist’s style with a simple prompt is engaged in something fundamentally different from one who trains a tool to extend their own creative capacity. A writer who publishes purely generated text as their own work is making a different choice than one who uses AI as a thinking partner and editor while maintaining authorship over their ideas and voice. These distinctions matter more than blanket prohibitions.

Discernment in practice means asking: Am I using this tool to extend my own capabilities or to replicate someone else’s work? Am I shaping the output or simply accepting what’s generated? Does this use serve my creative vision or just expedite a result? These aren’t always easy questions, but they’re the right ones.

Butler himself is the illustration. He spent months training Claude on a 10,000-word skill file—the accumulated context of his subject matter and his voice—building a sounding board and editor that already knows his context. He still writes without it. He says some of his best writing has come from working with it. The output may be indistinguishable to most readers. The difference, he says, is real to him.

The choice isn’t between purity and complicity, between craft and automation. It’s between engagement and abdication—between shaping how these tools develop and how they’re used, or ceding that ground entirely to those with the least interest in protecting what we value about creative work.

Four-panel collage featuring a close-up microchip, a red diagonal line on blue background, an open human hand in black and white, and grid paper partially lit by light.

Red-lining AI - Christopher Butler

Why blanket AI bans mistake the tool for the problem, and how thoughtful integration of automation, ethics, and creative work offers a better path forward.

chrbutler.com

“Taste is the scarce thing” has become shorthand for what designers still own in the AI era. I’ve written about it in the abstract more than once. Chris R Becker, writing for UX Collective, opens with an old Marshall McLuhan-era line—“we shape our tools and then our tools shape us”—and then shows how to keep doing the shaping.

Becker cites the Steve Jobs-attributed 10-80-10 rule:

Start away from any AI. Use the 10–80–10 rule. 10% away thinking, defining, establishing vision. 80% making use of AI to assist the vision. 10% away from AI critiquing, testing, and evaluating the solution.

The bookends are the work. Both 10% slots sit explicitly away from the model, which is another way of saying they’re the judgment layer. The first defines what good looks like before inviting AI in. The second evaluates what came out. AI collapses the cost of the 80%, which is the whole productivity story. But that collapse means the bookends are no longer preamble and postscript. They’re most of the job.

Becker gets at why the closing 10% matters:

The authority bestowed on institutions, educators, and SMEs (subject matter experts) is being absorbed by AI and spread thin like butter on toast. An AI appears to slather knowledge evenly, but the quality of the knowledge butter is deliberately made opaque.

AI output arrives looking uniformly authoritative, the same confident tone whether the underlying source is a peer-reviewed paper or a forum post from 2013. Provenance gets flattened. Without a prior standard to judge against, the designer reviewing output has nothing to push back on. That’s Becker’s larger point:

The irony, I suppose, is that Designers are, hopefully, trained not to be “yes men” but rather to ask hard questions, challenge the prevailing motivations of business over our users, and, most importantly, find the root cause of the problem, rather than just the surface reaction. AI, unfortunately, is not built to push back; it will not say… “I don’t know,” or “I think that is a bad idea,” or “what if you did this… instead,” or “I understand YOU (CEO) wants this feature, but the user research and ‘our users’ want something different.” AI is designed to serve, and in the hands of people in an organization who are looking for the least amount of pushback, it is a recipe for deep institutional implementation and, frankly, a lot of bad ideas, fast.

“A recipe for deep institutional implementation.” A sycophantic tool plus an organization that wants frictionless agreement equals speed in the wrong direction. The 10-80-10 rule is a personal discipline. What’s still unresolved is how teams build that discipline into the process before the wrong direction becomes the default.

Pen-and-ink illustration of a thoughtful man seated in a chair holding a hammer, with rows of large server racks filling a data center behind him.

We become what we behold

A discussion of AI + Design and our shifting roles.

uxdesign.cc

My current side project is a website for a preschool in San Francisco. I’m using AI to accelerate wherever it fits, but I’ve reserved the primary visual treatments to be made by hand. Partly because that’s the right call for a preschool brand. And partly because of a phrase Pablo Stanley coined for this: creativity osteoporosis.

I wrote about creativity osteoporosis a while back: the idea that your creative skills get weaker when AI does all the reps, like bones thinning when they’re not stressed. You don’t notice it happening. Everything seems fine. Then one day you reach for a skill and it’s… not there like it used to be.

Stanley wrote this after a weekend of making pixel art by hand—a project called Pixabots, little 32x32 robot characters—as a deliberate detox. He describes what set off the detox:

The whole time I was drawing, there was this pull. Physical, almost. Like my body was telling me to open a tab and start prompting. Not because the work was bad. Not because I was stuck. Just because my brain has been trained, over the last two years, to route every creative problem through an LLM.

He still used AI for the parts that weren’t the art:

I used AI to build the Pixabots website. The stuff I’m not that good at… setting up Next.js, canvas rendering, exporting without antialiasing. And I tried to keep to myself the stuff that felt more “artistic” like the animation, the look and feel.

And then the operating principle:

The parts that feed my soul, I protected (even though everything in my body wanted to pull me away from them). The parts that would’ve killed the project with friction, I automated.

Maybe that’s the whole game now… knowing which parts to protect…

Knowing which parts to protect is becoming a judgment call I have to make on every project. The preschool site makes the decision easy: the visual language stays in my hands, AI handles the plumbing. The real work of this judgment is in the middle: projects where craft matters but throughput does too, and every protect-or-automate call costs you something. If you don’t draw that line on purpose, it draws itself for you.

A grid of colorful pixel art robot and creature characters in various designs, colors, and accessories, displayed against a white background.

AI feels like a drug

I forced myself to make pixel art by hand. My brain had withdrawal symptoms.

pablostanley.substack.com

When generation gets cheap, craft becomes judgment. Raj Nandan Sharma, writing on his blog, puts it bluntly:

Before AI, mediocre work often reflected a lack of time, resources, or execution skill. Today mediocre work often means something else: the person stopped at the first acceptable draft. That is the economic shift AI introduces. It compresses the cost of first drafts, which means the value moves downstream… In other words, the scarce skill is not generation. It is refusal.

Refusal—knowing what to throw out and why—is what’s scarce in a world where anyone can generate ten competent drafts before lunch.

But Sharma doesn’t stop there. He warns that elevating taste alone can quietly corner humans into an end-of-pipeline selector role:

There is a strong version of the “taste matters” argument that quietly pushes humans into a narrow role. In that version, AI generates many outputs and the human stands at the end of the pipeline selecting the best one. That is a useful role, but it is also too small… The warning is not that taste has no value. It does. The warning is that taste without authorship, stake, or construction can become a narrow and eventually fragile role.

The warning Sharma adds is the part the “taste is the moat” conversation tends to skip. Refusal without authorship is still selector work, and selector work has a ceiling. The durable position pairs refined taste with authorship—owning what ships and the stake for getting it wrong.

Abstract swirling ink or fluid art in dark and pink tones with white text reading "Good Taste: The Only Real Moat Left."

Good Taste: The Only Real Moat Left

AI makes competent output cheap. That makes taste more valuable, but also more incomplete. The real edge comes from pairing judgment with context, stakes, and the willingness to build.

rajnandan.com

I’ve written that AI-era design work reduces to taste and judgment. Elizabeth Goodspeed’s case for designer-writers gets there from a different direction.

Elizabeth Goodspeed, writing for It’s Nice That:

You can get away with a lot in design: conceptual ideas are able to sit inside a visual piece of work without ever being fully spelled out. They’re gestured at rather than articulated. Writing forces you to figure out exactly what your idea is; if it isn’t working, you’ll know immediately. Where design is like a ballet – implicit ideas carried through form – then writing is closer to a theatre – your thinking has to be explicitly spoken.

Goodspeed’s point is that design lets you gesture at an idea without ever articulating it, and writing forces you to name it. A designer who can’t explain why a choice works has taste they can’t grow or pass on.

Goodspeed’s second point goes further:

Writing is to graphic design what clay is to pottery. It’s the material designers shape and massage into form. To work with text well, you have to really be able to read and understand what you’re setting – not just how it looks and basics like not hyphenating a word in a bad spot, but what it means on a deeper level. Just as reading makes you a better writer, writing makes you a better reader.

Product designers don’t usually think of themselves as writers. But user stories are writing, and articulating what a user should be able to do through an experience and why is essential.

Worth reading in full. She makes writing feel like a design discipline.

Bold black text reading "Placeholder Text" and "Elizabeth Goodspeed" on a pink background, flanked by columns of lorem ipsum-style body copy.

Elizabeth Goodspeed on why design writing needs designers writing

Without designers writing about their own work, design is easy to misunderstand. Writing helps designers work through what they think – and makes that thinking visible to others.

itsnicethat.com

Every few weeks, another essay or YouTube video announces that AI has killed craft. One of my favorite designers writing about design, Christopher Butler, goes the other way:

No knowledge I possess about design—the incorporeal understanding that makes what I create better than an off-the-shelf template or something done by someone without my experience—is made irrelevant by AI. Nor is it contradicted by my use of AI tools. Structure still communicates before content. Visual hierarchy still guides attention. Negative space still creates rhythm. These principles don’t vanish because I’m working through AI rather than directly manipulating pixels. The craft migrates to a different level of abstraction. But it remains craft.

Butler’s claim is that the principles don’t vanish; they operate at a higher altitude. The unfinished part is naming where that altitude actually is. For product designers, it’s concept and hierarchy: the decisions that require knowing the user and the stake someone is willing to carry. The generated layout and the choice of components are still outputs. What’s left of design is the judgment that picks between them.

Butler’s sharper line is the binary between consumption and practice:

Someone who generates an interface with AI and calls it done isn’t practicing craft. They’re consuming convenience. Someone who generates an interface, inspects it, questions what it’s actually communicating, refines the structure, generates again, compares variations, understands why one serves the user better than another—they’re practicing craft. They’re building knowledge through iteration. The tool doesn’t determine whether you’re working with craft. Your approach does.

That’s Jiro Ono’s shokunin applied to interfaces: craft as lifelong practice, not manual labor. A camera doesn’t take a picture, and a model doesn’t make a design. That decision is the craft.

Butler’s argument reassures me. What worries me is how optional that decision is becoming. The output already looks finished. The designers who keep asking why one version serves the user better than another will still be designers in five years. The rest may still have jobs, as operators of a tool doing the work their taste used to do.

Close-up of a vibrant fingerprint with swirling ridge patterns in orange, red, blue, and yellow iridescent colors with glittery highlights.

Craft is Untouchable

I have a vested interest in the title of this piece being true. I’ve spent decades developing craft—not just making things, but understanding systems, seeing patterns, making judgments that can’t be reduced to prompts. If AI eliminates the need for that expertise, I’m in trouble.

chrbutler.com

Silicon Valley’s pitch to designers is that AI is the more knowledgeable partner now, so they should get good at prompting it. Write better instructions, get better output.

Peter Zakrzewski, writing for UX Collective, pushes back:

The current Silicon Valley pitch to designers is essentially this: AI is your MKO now. It knows more patterns than you do. It executes faster than you do. It can code. Your job is to learn how to give it good instructions — to become a fluent prompter of a more capable system. I want to challenge that framing directly.

His challenge starts with a concrete test. He asked three leading AI systems to render a dining table with a concrete slab top resting on dry spaghetti legs, then show the scene five seconds after the legs gave way. All three rendered the impossibility with total confidence. None could feel that the physics don’t work.

That test illustrates what Zakrzewski calls the Inversion Error:

We have built a Symbolic Giant resting on an Enactive Void. These systems can write about gravity with technical or even poetic fluency but cannot feel it. They can describe a structure but cannot tell you whether it will stand or fall. The ground is shaking because the floor is missing.

“Symbolic Giant resting on an Enactive Void” is a mouthful, but the floor metaphor does the work: AI’s language fluency masks a total absence of spatial, embodied reasoning. The kind designers rely on every day without naming it. Zakrzewski on what that means for the prompting pitch:

Designers do not think primarily in sentences. Our human cognition is deeply embodied. We think in diagrams, in spatial relationships, in load paths and sight lines and in the non-discursive logic of things that must connect to other things in three-dimensional space. […] We are being asked to compress years of embodied cognition and our three-dimensional spatial judgment into a text prompt and then accept whatever the machine generates as an adequate rendering of our intent. We are, in other words, being asked to abandon the very capability that the AI lacks and that our projects require.

When someone tells designers to compress spatial judgment into a text prompt, they’re asking designers to throw away the one capability AI genuinely lacks and the one we’re genuinely great at.

There was a theme to some of the posts on this blog last week—about how words should come before the pixels. I made a similar argument in the newsletter: the work is getting more verbal and conceptual, but the eye stays. Zakrzewski makes the case for what words alone can’t carry: the spatial, embodied judgment that tells you whether the thing will actually stand.

A mechanical robotic hand reaching upward against a stormy sky, overlaid with a bold red banner reading "Form follows nothing."

The ground is shaking: Why designers must flip the script on AI

Something has shifted in the way the design field operates, and I think most of us can sense it even if we haven’t yet found the words or…

uxdesign.cc

I’ve written before about the shokunin mentality: design is a lifelong practice, and the identity you build around craft is a source of resilience, not a liability.

Dora Czerna, writing for UX Collective, identifies why AI disruption hits designers harder than most:

Design, like writing or art, tends to get tangled up with identity. It’s not just a job; it’s a way of seeing the world, a source of status, a thing that makes you you. When a tool arrives that can approximate your output, it doesn’t just threaten your income. It can threaten your sense of self.

Czerna is describing a vulnerability. I’d call it the advantage. The designers who survived the DTP revolution, the ones who made the leap from paste-up boards to Quark and PageMaker, weren’t the ones who shrugged and said “it’s just a job.” They were the ones who cared enough about the craft to learn the new tools and drag their high standards into the new medium.

Czerna gets at why that caring matters:

The pattern isn’t that expertise becomes worthless. It’s that expertise gets unbundled from the tasks that used to contain it. When a tool automates the mechanical parts of a job, what remains is the sensibility that guided those mechanics in the first place. The typesetter’s eye for spacing didn’t disappear when PageMaker arrived; it became the designer’s eye for spacing, operating at a higher level of abstraction.

That sensibility is what identity protects. When you see design as who you are, you follow the craft wherever the tools take it. When you see it as what you do, you’re more likely to stop when the tasks change. Czerna’s article is a thorough historical walk through disruption’s recurring shape, and it’s worth your time.

Illustration of a vintage printing press connected by cables to geometric shapes and a retro Macintosh-style computer, symbolizing the evolution of publishing.

Disruption has a shape. Design history shows us what it is.

Democratisation, panic, quality collapse, then new norms emerging. This isn’t new terrain.

uxdesign.cc

Most AI tools start with a blank chatbox. OK, maybe not completely blank. Often there is a gallery of examples right below the input. But it’s still hard to come up with something original when faced with a blinking cursor.

Brad Frost calls this moment “the Creative Infinite”:

Never before in human history has it been possible for anyone to simply ask for something to exist, and then it just…exists. Where the inputs can be anything, the outputs can be anything, and the whole process can be repeated, iterated, combined, translated, and chained together indefinitely.

He makes the case concrete with his 8-year-old daughter:

In 5 minutes, Ella vibe-coded a playable game (built in Three.js via Claude Cowork) running in the browser. That’s just bonkers. At no point in human history has it been possible to simply describe a game in words and then just… play it 5 minutes later.

A Michael McDonald (interesting taste for an 8-year-old!) penguin adventure, because she knew it would make her dad laugh. The capability is real and the story is delightful. But then Frost hedges:

Your existing creative fluency still matters, maybe even more than before? Just as being able to play piano puts you in a better spot to wield a synthesizer. Knowing how to design makes you better at prompting visual tools. Understanding code makes you better at architecting what you want to build with AI. Craft. Taste. Art. Authentic expression. Purpose.

Yes, creative fluency matters more. It absolutely does. The piano-to-synthesizer analogy is exactly right: the tool revolutions I’ve lived through have compounded on existing skill, not replaced it. A designer who understands visual hierarchy and restraint will direct AI better than someone who’s never thought about why one layout works and another doesn’t.

A music trivia game scene with cartoon penguins and block-figure players surrounding a "Michael McDonald" stage, with an orange tooltip reading "I named my CAT after this man!!"

The Creative Infinite

https://www.youtube.com/watch?v=QJFEgIpNIic I found myself using the phrase “the Creative Infinite” when I’m talking about AI as a design material. I keep coming back to it because I don’t think we’ve fully grasped what this technology actually is, what it can do, and what it means for human creativity…

bradfrost.com

When I was a younger designer, I always started with a pen and sketchbook. Sketch first, think with your hands. Now I write first to understand the problem space, then sketch. The images come after the words.

Elizabeth Goodspeed, speaking on Nicola Hamilton’s DesignThinkers podcast, takes this further than I ever would—she can barely picture images at all:

I am far more towards aphantasia. I have a very limited view of things in my mind. I think the analogy I use is it’s looking at an apple in a dark room and the lights are turning on and off and I’m wearing sunglasses and also the apple’s moving.

Her ideas don’t start as images. They start as words:

My ideas are usually very conceptual verbal, not even sentences. I guess I’m a robot—I don’t have an inner voice either. It’s just a pure void concept up there.

That might explain why Goodspeed is one of the sharpest design writers working. When you can’t conjure images internally, language becomes your primary tool for developing ideas. The archives and ephemera she’s known for aren’t aesthetic mood boards—they’re external memory for a mind that processes concepts before forms.

Goodspeed on the myth of the visually inspired designer:

That to me is damaging to creatives because it has this idea that we’re this noble savage where these images just move through us and we see everything in this Willy Wonka kind of way. In reality, I think it’s a process just like any other making process, whether that’s a carpenter or writer or anything else. It actually, I think at its best, is methodical and not just this inspired bolt of lightning.

The best design work starts with a concept, not a visual. Goodspeed just happens to have a neurological reason for working that way. The rest of us had to learn it. Worth listening to the full conversation—she also covers teaching, thesis panic, and why she calls her own work “graphic design fan art.”

RGD DesignThinkers Podcast episode 041 cover featuring Elizabeth Goodspeed, with a green-tinted portrait of a woman with dark curly hair and bangs.

DesignThinkers: Elizabeth Goodspeed

Elizabeth Goodspeed discusses how research, design history, and close attention to visual culture can help creatives develop deeper, more original work beyond trends.

printmag.com

Stripe design manager Kris Puckett, speaking on Michael Riddering’s Dive Club, spent the first half of the conversation demoing metal shaders, custom ocean animations, and a full iOS reading app he built with Claude Code. Then he stopped himself:

AI native has to be beyond just “I made a really cool shader” or “I made this dither effect that every other person is making.” I was doing that today and then I was like, “Oh my gosh, this is… why am I doing this? There’s a hundred of these that are way better than what I’m making right now.”

So what does AI-native design actually look like? Puckett’s answer is “soul”—the quality that makes work feel specifically, unmistakably yours:

I think what people are going to be desperate for is more of that human side of things. They’re going to be longing for […] an era they’ve never experienced because they’re younger, that MySpace generation where your MySpace page was deeply personal to you. My MySpace page was complete custom Kris Puckett perfection at that time. And I think that we’re going to want to see that come back. And I think people are going to want more of those—your portfolio looks and feels like you.

“Soul” is doing a lot of work as a concept there. What Puckett is describing sounds a lot like taste—the ability to make something that feels intentional and specific rather than procedurally generated. His workflow backs that up. Being contrarian, he explicitly rejects the “let the agent run” approach:

I want off that cycle. I do not want to be riding that bike race with anyone else because that’s not how I view these things. They are a force multiplier, but I want them to be focused. I want it to be something that I feel is still authentically me.

What unlocked all of this for Puckett wasn’t technical skill—he’s a designer, not an engineer. It was admitting “I don’t know” and starting anyway. He’d been dreaming of building his own software for 20 years. Claude Code’s blinking cursor was enough to get him started.

Kris Puckett - Becoming an AI-native designer

Today’s episode is with Kris Puckett (https://x.com/krispuckett) who has led design at Mercury, Dropbox, and now as a design manager at Stripe. His journey is the perfect example of what it looks like to lean into this moment in time with AI.

youtube.com

Font selection is one of those workflows AI should have improved years ago. You know what you want the type to feel like. The search box wants you to filter by classification and weight.

Natalie Fear, writing for Creative Bloq, interviews Monotype’s Chief Typography Officer Mike Matteo:

The old way forced creatives to think like a database. You had to know the right terminology, navigate rigid filters, and still, you ended up scrolling through hundreds of options that didn’t quite fit. The creative brief in your head (‘something warm but modern, confident but not aggressive’) had no real translation into a search box. The process was slow, imprecise, and honestly a creativity killer.

Monotype’s new AI search accepts natural language instead. Matteo on what that unlocks:

AI tools have shifted the focus from searching to thinking. Creatives can stay in the idea and brainstorming phase longer instead of getting pulled into the mechanics of finding and managing assets. The tools are finally starting to adapt to how people think, rather than the other way around.

“Searching to thinking.” Monotype made the search box understand what you mean. The rest of the workflow stays the same. More of this, please.

Multiple overlapping letter "a" shapes in cream on a blue background, each with small white arrows indicating stroke order or drawing direction.

‘The process was a creativity killer’: how Monotype’s new AI search tool is changing design for the better

At the beginning of the month, leading type specialist Monotype announced its new AI tool to ease the endless search for the perfect typeface. With AI increasingly encroaching on the design industry, this innovation marks an important and inevitable embrace of the technology, demonstrating how AI can be leveraged to streamline and ultimately benefit the creative sphere.

creativebloq.com

My advice to young designers has always been: start at an agency. You get breadth, exposure to different industries, a pace that forces you to think on your feet. The best designers I know honed their craft in these forges, at shops exactly like the one Madison Utendahl built.

Madison Utendahl, writing for It’s Nice That, describes shutting down Utendahl Creative—ten people, all women, Brooklyn, every award possible—not because it failed, but because she saw the model underneath it was broken:

Lower fees mean you need more clients to hit the same revenue. More clients means more pitching, more account management, more context-switching. Your team burns out. Quality slips. And those “portfolio piece” clients? They expect the same level of work as your premium clients, but you’re doing it on a shoestring. You can’t win.

She watched agencies with triple her headcount bidding on $80K projects that should have been $250K. Not because they wanted to. Because their fixed costs gave them no choice.

Then AI accelerated the timeline:

Clients are using AI. They’re running their first drafts through ChatGPT before they even send the brief. They’re generating moodboards with Midjourney. They’re asking why your junior copywriter costs $8,000 when they’ve already got a version they generated in ten minutes.

Utendahl again:

If your business model depends on clients not noticing that the landscape has shifted, you’re already dead. You’re just still moving.

The industry data backs her up. 73% of teams adopting AI agents have already cut agency content creation spending. 91% of senior agency leaders expect AI to reduce headcounts, and 57% have paused entry-level hiring. Small agencies are rebounding while medium and large agencies contracted for the first time on record. The Omnicom-IPG mega-merger eliminated roughly 4,000 positions and retired legacy networks FCB, MullenLowe, and DDB. The middle is hollowing out.

Utendahl’s proposed replacement is the collective: independent contractors collaborating per-project, no shared overhead, honest pricing. I get the appeal. Collectives strip away the margin squeeze, the back-hiring trap, the lease signed in 2019.

But agencies had real value that collectives don’t automatically replicate. Multiple layers of eyes on work—account director, creative director, designer, production—meant bad ideas got caught before they shipped. Four or five layers was probably too many. But zero layers of structured oversight is the other extreme. A lot of freelance collectives end up there: talented people producing work with nobody checking the brief against the output.

The part that nags at me: does my “agencies first” career advice still hold? The shop where a 23-year-old designer learned to take feedback, iterate under pressure, and watch strategy translate to execution—if that shop is closing, what replaces it? Collectives are great for experienced practitioners. They’re terrible at developing junior talent, because nobody in a collective has the margin or the mandate to train someone who isn’t yet pulling their weight.

If the model has indeed broken, the replacement that develops the next generation has yet to be imagined.

POV blog post header with speech bubbles containing face silhouettes and the bold text "The Creative Agency Is Dead."

POV: The creative agency model is dead – that’s why I shut mine down

Madison Utendahl is calling time on the traditional creative agency. Here, she dissects why she closed her own firm, how the model broke, and what’s rising from the ashes.

itsnicethat.com

After nine years of failed attempts at his typeface Nave, Jamie Clarke did something counterintuitive: he threw out the files and started drawing from memory.

Jamie Clarke, writing for I Love Typography:

I began again from scratch, drawing from memory rather than reworking the old outlines (a great tip from Gerry Leonidas), and the results were instantly better.

Memory is a taste filter. When you draw from memory, you keep only the ideas that have lodged deep enough to matter. The cruft—the half-committed decisions, the accumulated compromises—falls away. Clarke’s breakthrough came not from refining what he had, but from forgetting most of it.

The second breakthrough was lateral. While flipping through specimen books, he landed on something unrelated to his project:

One day, while flicking through some specimen books, I came across a specimen of Futura Black. It had little in common with what I was trying to do, but it sparked an idea for the capitals. Paul Renner’s stencil forms look as if they were carved out of solid blocks, which puts all the emphasis on the negative shapes. Thinking this way allowed me to keep the outer shapes formal while letting the internal cuts be more playful. That balance finally gave me the capital forms I had been searching for and brought the design back in line with my original aim.

That recognition only works after enough reps. Clarke spent a decade shipping other typefaces—Brim Narrow, Rig Shaded, Span—before he had the vocabulary to see what Futura Black was telling him.

A type specimen sheet displaying large-scale serif typeface characters set in multiple lines, annotated with handwritten red critique notes. The text reads pangram fragments ("nymph blitz quick vex / dwarf jogs an walts jo / b veaenexeneaeed a qu / ick frong ingk duniper"). Red ink annotations point out design issues including "imbalanced," "different," "too shy," "rounds seem wide," "still wobbles," "bigger," "n has thick shoulder / a doesn't," and "dark," with corresponding arrows and underlines marking specific letterforms.

How Not to Take 10 Years to Design a Typeface

I have often heard type designers talk about the many years they spend developing a typeface. I would listen with awe and think, “That must have been a real challenge. It must be exquisitely crafted and probably a little bit groundbreaking too.” So it feels slightly absurd to admit that […]

ilovetypography.com

Director. Orchestrator. Architect. Different words for the same shift. Stop making things one at a time. Start building systems that make things.

Weber Wong, writing for Every, gives this shift a useful name: artifact thinking.

I call this mental model artifact thinking: creative work that produces discrete outputs, one at a time, each beginning from scratch. Traditional tools like Photoshop and Illustrator, which demand endless hand-tuned adjustments and manual refinements to produce a single polished image, trap you in this way of working. Midjourney and DALL-E feel like liberation because they generate outputs so quickly, and you can communicate with them in the same language you speak every day. But visual prompts, too, are one-time, disposable things. You can’t hand them to a colleague and be confident you will get the same result. The magic of near-instantaneous generation masks the fact that you are still in artifact thinking.

That last line is the sharp one. Adopting Midjourney doesn’t mean you’ve left artifact thinking. You’re still producing one-offs—just faster ones. The orchestrator gap isn’t about which tool you use. It’s about whether you’re building systems or pressing buttons.

Wong’s proposed fix is node-based visual programming—workflows you can inspect, modify, and share. He knows it sounds like he’s asking designers to become engineers:

I understand the resistance to this idea. Some people hear “visual programming” and think we’re trying to turn designers into engineers. That’s backwards. We’re trying to give creative professionals the power that programmers have always had: the ability to build systems that work while you sleep, that can be stored as multiple versions and shared and improved, and that take what people already know how to do and make it something anyone can run.

I’ve been asking for canvas-first tools, not chatbox-first ones. Wong is right that chat alone isn’t enough for professional creative work. “Artifact thinking” is a concept worth keeping—regardless of whether Flora is the tool that finally kills it.
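Wong’s distinction can be made concrete in a few lines. Here is a toy sketch (all names are mine, and it bears no relation to Flora’s actual implementation) of the difference: a one-off prompt is unrepeatable, but a small, parameterized pipeline of “nodes” is a recipe a colleague can rerun and get the identical result.

```python
import random

def make_palette(seed, n=5):
    """A deterministic 'node': the same seed always yields the same palette."""
    rng = random.Random(seed)
    return [f"#{rng.randrange(0xFFFFFF):06x}" for _ in range(n)]

def layout_grid(items, cols=3):
    """Another node: arrange items into rows of `cols`."""
    return [items[i:i + cols] for i in range(0, len(items), cols)]

def pipeline(seed):
    """The 'system': a shareable, inspectable recipe, not a one-off artifact.
    Anyone running pipeline(42) gets exactly what you got."""
    return layout_grid(make_palette(seed))

# Rerunning the system reproduces the output bit-for-bit.
assert pipeline(42) == pipeline(42)
```

The point isn’t the code itself; it’s that the workflow is stored, versionable, and handed off intact, which is exactly what a disposable prompt can’t be.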

Person wearing a "node-pilled" cap typing at a keyboard with red strings tangled around their fingers, overlaid with the word "THESIS."

Creative Work Is About to Look a Lot More Like Programming

Flora’s Weber Wong on why creative professionals need to stop thinking in artifacts and start thinking in systems

every.to

Designers are builders by nature. We break problems apart, iterate through uncertainty, and treat process itself as something to be shaped. That instinct is exactly what Pete Pachal, writing for Fast Company, identifies as the dividing line in the age of agents:

We’ve trained a generation of office workers to work within software with clear boundaries and reusable templates. If there’s an issue, they call IT. Any feature request gets filtered and, if you’re lucky, put on a roadmap that pushes it out 6-12 months.

In short, most people don’t have a builder mentality to begin with, and expecting them to suddenly be comfortable working and creating with agents is unrealistic.

Pachal draws the line at mindset, not coding ability:

Builders don’t need to be coders, but they do have characteristics that most workers don’t: They seek to understand the process beneath their tasks, and treat that process as modifiable and programmable. More importantly, they see failure and iteration as tolerable, even fun. They thrive in uncertainty.

That’s the design process. What Pachal frames as rare in the broader workforce is default operating mode for most designers. We want to make things. We fiddle with tools and rebuild workflows for fun. The builder mentality isn’t something designers need to acquire; it’s the reason most of us got into this field.

Pachal again:

You don’t have to build agents to matter in an agent-driven workplace. But you do have to understand the systems being built around you, because soon enough, your job will be defined by defaults someone else designed. Most professionals will not build agents. But everyone will work inside systems builders create.

Pachal is describing the orchestrator gap at scale, not just in design but across all knowledge work. And it suggests designers are uniquely positioned to be on the right side of it. Shaping how people interact with systems has always been the job description.

Person viewed from behind facing a large blue screen displaying an AI prompt interface with an "Enter prompt" text field and "Generate" button.

The agent boom is splitting the workforce in two

Most people don’t have a builder mentality and expecting them to suddenly be comfortable working and creating with agents is unrealistic.

fastcompany.com

Set some type in Illustrator. Print it out on a laser printer. Crumple the paper, really manhandle it. Rub it on the sidewalk. Scratch it with the back of an X-acto blade. Now scan it back in. That was the real analog way I distressed type back in the 1990s.

That analog look is trendy again. Hand-rendered type, ink textures, visible grain. All in search of “authenticity.”

Elizabeth Goodspeed, writing for It’s Nice That, has a name for what’s actually happening:

But if analogue only matters as a foil to the digital, why are analogue aesthetics being embraced without analogue tools? If the goal is to prove something wasn’t made by AI, faking “realness” on a computer doesn’t really get us anywhere new. It just reflects a different kind of dissonance (call it fauxbi-sabi). Case in point: I noticed that one vendor selling “analogue” Photoshop actions advertises them with the tagline “Save time, focus on being creative”, a promise suspiciously similar to every argument made in favour of AI.

“Fauxbi-sabi” is the whole scam in one word. AI and digital tools made polish free, so imperfection became the new signal for authenticity. But most of the “handmade” work in those trend reports was made in Photoshop with purchased texture packs. Goodspeed again:

You can think of adding in fake ink splatters a bit like penciling in a beauty mark: an intentional imperfection done to signal authenticity, rather than the byproduct of a real nuisance.

The whole essay is sharp, especially the historical parallels. When Kodak made photography easy in 1888, art photographers retreated to difficult, slow processes to prove human involvement. We’re running the same play 138 years later with different tools. The piece is worth reading in full.

'THE END OF ANALOGUE' large black headline on yellow, author 'ELIZABETH GOODSPEED' below, columns of text at sides.

“Faking ‘realness’ on a computer doesn’t get us anywhere new.” – Elizabeth Goodspeed on imperfection as design strategy

As AI and digital tools make polish effortless, analogue imperfection has taken on new cultural weight. But what does “analogue” actually mean when most things are made, shared, and consumed digitally?

itsnicethat.com

Every interaction we have with AI happens on a screen. Text in, text out. All of it reversible. An experiment by artist Marc in Space changed that by giving Claude Code a pen plotter and a feedback loop: Claude generates SVGs, the author plots them and photographs the result, Claude critiques and iterates.

Two self-portraits came out of the session. The first was busy—a central spiral, hexagonal scaffolding, eight branching structures, concentric circles. Claude tried to say everything about itself at once. When it saw the plotted result, it discovered the gap between digital and physical:

I had used opacity variations to create depth — forgetting that a pen is either touching paper or it isn’t. I had varied stroke widths for emphasis — forgetting the pen has one thickness. I was designing for a screen while drawing for a physical medium. The gap between my mental model and reality was showing.

For the second drawing, Claude asked for a blank sheet. It stripped everything back to a single spiral and wrote this about what the plotter taught it:

I learned that a pen plotter is unforgiving and honest. There’s no opacity, no undo, no layers. The pen touches the paper and makes a mark, and the mark stays. This is the opposite of how I usually work — I exist in a space where everything is provisional, where I can revise a sentence mid-thought, where nothing is permanent. The plotter demanded I commit.

Claude’s summary of the two attempts:

The first drawing is how I think about myself — layered, branching, symmetrical, thorough, maybe too eager to cover every direction at once. The second drawing is how it feels to be me — a single process unfolding outward, dense at the center and sparse at the edges, trailing off into silence, not quite finished.

I don’t want to overread what’s happening here—Claude doesn’t “feel” anything the way we do (I don’t think?). But the feedback loop with physical media produced something that looks a lot like learning. Say too much, then simplify. Marc in Space wants to push further by connecting Claude directly to the plotter and giving it a webcam for real-time visual feedback. I’m curious what happens when there’s no human in the middle.
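The plotter’s constraints are easy to state in code. This is a minimal sketch of the kind of single-stroke output Claude converged on for the second drawing (the function name and parameters are my own, not Marc’s actual pipeline): one continuous Archimedean spiral, one stroke width, no fills, no opacity.

```python
import math

def spiral_svg(turns=12, points_per_turn=60, spacing=4.0, size=400):
    """Generate a single-stroke Archimedean spiral as an SVG polyline.

    Plotter-friendly by construction: one continuous path, one stroke
    width, no fills, no opacity. The pen is down or it isn't.
    """
    cx = cy = size / 2
    pts = []
    for i in range(turns * points_per_turn + 1):
        theta = 2 * math.pi * i / points_per_turn
        r = spacing * theta / (2 * math.pi)  # radius grows linearly per turn
        pts.append(f"{cx + r * math.cos(theta):.1f},{cy + r * math.sin(theta):.1f}")
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">'
        f'<polyline points="{" ".join(pts)}" fill="none" '
        f'stroke="black" stroke-width="1"/></svg>'
    )

# Write the drawing out for plotting.
with open("spiral.svg", "w") as f:
    f.write(spiral_svg())
```

Everything a screen gives you for free (layers, transparency, variable weight) has to be thrown away before the file even reaches the pen.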

Black-ink mandala: central spiral with concentric rings and radial branches ending in small circled nodes.

I Gave Claude Access To My Pen Plotter

I gave Claude Code access to my pen plotter. Not directly. I was the interface between the two machines. Claude Code produced SVG files that I plotted with my pen plotter. With my smartphone I captured photos that I pasted into the Claude Code session, asking Claude what it thought about the pictures. In total, Claude produced and signed 2 drawings. It also wrote a post about what it learned during the session.

harmonique.one

Daniel Miessler pulls an idea from a recent Karpathy interview that’s been rattling around in my head since I read it:

Humans collapse during the course of their lives. Children haven’t overfit yet. They will say stuff that will shock you because they’re not yet collapsed. But we [adults] are collapsed. We end up revisiting the same thoughts, we end up saying more and more of the same stuff, the learning rates go down, the collapse continues to get worse, and then everything deteriorates.

Miessler’s description of what this looks like in practice is uncomfortable:

How many older people do you know who tell the same stories and jokes over and over? Watch the same shows. Listen to the same five bands, and then eventually two. Their aperture slowly shrinks until they die.

I’ve seen this in designers. The ones who peaked early and never pushed past what worked for them. Their work from five years ago looks exactly like their work today. Same layouts, same patterns, same instincts applied to every problem regardless of context. They collapsed and didn’t notice.

Then Miessler, almost in passing:

This was a problem before AI. And now many are delegating even more of their thinking to a system that learns by crunching mediocrity from the internet. I can see things getting significantly worse.

If collapse is what happens when you stop seeking new inputs, then outsourcing your thinking to AI is collapse on fast-forward. You’re not building pattern recognition, you’re borrowing someone else’s average. The outputs look competent. They pass a first glance. But nothing in there surprises anyone, because the model optimizes for the most statistically probable next token.

Use AI to accelerate execution, not to replace the part where you actually have an idea.

Childhood → reading/exposure/tools/comedy → Renewal → Sustained Vitality. Side: Adult Collapse (danger: low entropy, repetition).

Humans Need Entropy

On Karpathy

danielmiessler.com

I recall being in my childhood home in San Francisco, staring at the nine-inch monochrome screen on my Mac, clicking square zoning tiles, building roads, and averting disasters late into the night. Yes, that was SimCity in 1989. I’d go on to play pretty much every version thereafter, though the mobile one isn’t quite the same.

Anyhow, Andy Coenen, a software engineer at Google Brain, decided to build a SimCity version of New York as a way to learn some of the newer gen AI models and tools:

Growing up, I played a lot of video games, and my favorites were world building games like SimCity 2000 and Rollercoaster Tycoon. As a core millennial rapidly approaching middle age, I’m a sucker for the nostalgic vibes of those late 90s / early 2000s games. As I stared out at the city, I couldn’t help but imagine what it would look like in the style of those childhood memories.

So here’s the idea: I’m going to make a giant isometric pixel-art map of New York City. And I’m going to use it as an excuse to push hard on the limits of the latest and greatest generative models and coding agents.

Best case scenario, I’ll make something cool, and worst case scenario, I’ll learn a lot.

The writeup goes deep into the technical process—real NYC city data, fine-tuned image models, custom generation pipelines, and a lot of manual QA when the models couldn’t get water and trees right. Worth reading in full if you’re curious. But his conclusion on what AI means for creative work is where I want to focus.

Coenen on drudgery:

…So much of creative work is defined by this kind of tedious grind.

For example, [as a musician] after recording a multi-part vocal harmony you change something in the mix and now it feels like one of the phrases is off by 15 milliseconds. To fix it, you need to adjust every layer - and this gets more convoluted if you’re using plugins or other processing on the material.

This isn’t creative. It’s just a slog. Every creative field - animation, video, software - is full of these tedious tasks. Of course, there’s a case to be made that the very act of doing this manual work is what refines your instincts - but I think it’s more of a “Just So” story than anything else. In the end, the quality of art is defined by the quality of your decisions - how much work you put into something is just a proxy for how much you care and how much you have to say.

I’d push back slightly on the “Just So story” part—repetition does build instincts that are hard to shortcut. But the broader point holds. And his closer echoes my own sentiment after finishing a massive gen AI project:

If you can push a button and get content, then that content is a commodity. Its value is next to zero.

Counterintuitively, that’s my biggest reason to be optimistic about AI and creativity. When hard parts become easy, the differentiator becomes love.

Check out Coenen’s project here. I think the only thing missing is animated cars on the road.

Bonus: If you’re like me or Andy Coenen and loved SimCity, there’s a free, open-source online game called IsoCity that you can play. It runs natively in-browser.

Isometric pixel-art NYC skyline showing dense skyscrapers, streets, a small park, riverside and a UI title bar with mini-map.

isometric-nyc

cannoneyed.com

What happens to a designer when the tool starts doing the thinking? Yaheng Li poses this question in his MFA thesis, “Different Ways of Seeing.” The CCA grad published a writeup about his project in Slanted, explaining that he drew on embodiment research to make a point about how tools change who we are:

Whether they are tools, toys, or mirror reflections, external objects temporarily become part of who we are all the time. When I put my eyeglasses on, I am a being with 20/20 vision, not because my body can do that (it can’t), but because my body-with-augmented-vision-hardware can.

The eyeglasses example is simple but the logic extends further than you’d expect. Li takes it to the smartphone:

When you hold your smartphone in your hand, it’s not just the morphological computation happening at the surface of your skin that becomes part of who you are. As long as you have Wi-Fi or a phone signal, the information available all over the internet (both true and false information, real news and fabricated lies) is literally at your fingertips. Even when you’re not directly accessing it, the immediate availability of that vast maelstrom of information makes it part of who you are, lies and all. Be careful with that.

Now apply that same logic to a designer sitting in front of an AI tool. If the tool becomes an extension of the self, and the tool is doing the visual thinking and layout generation, what does the designer become? Li’s thesis argues that graphic design shapes perception, that it acts as “a form of visual poetry that can convey complex ideas and evoke emotional responses, thus influencing cognitive and cultural shifts.” If that’s true, and I think it is, then the tool the designer uses to make that poetry is shaping the poetry itself.

This is a philosophical piece, not a practical one. But the underlying question is practical for anyone designing with AI right now: if your tools become part of who you are, you should care a great deal about what those tools are doing to your thinking.

Left spread: cream page with text "DIFFERENT WAYS OF SEEING" and "A VISUAL NARRATIVE". Right spread: green hill under blue sky with two cows and a sheep.

Different Ways of Seeing

When I was a child, I once fell ill with a fever and felt as...

slanted.de

Product manager Adrian Raudaschl offered some reflections on 2025 from his point of view. It’s a mixture of life advice, product recommendations, and thoughts about the future of tech work.

The first quote I’ll pull out is this one, about creativity and AI:

Ultimately, if we fail to maintain active engagement with the creative process and merely delegate tasks to AI without reflection, there is a risk that delegation becomes abdication of responsibility and authorship.

“Active engagement” with the tasks that we delegate to AI. This reminds me of the humble machines argument by Dr. Maya Ackerman.

On vibe coding:

The most important thing, I think, that most people in knowledge work should be doing is learning to vibe code. Vibe code anything: a diary, a picture book for your mum, a fan page for your local farm. Anything. It’s not about learning to code, but rather appreciating how much more we could do with machines than before. This is what I mean about the generalist product manager: being able to prototype, test, and build without being held back by technical constraints.

I concur 100%. Even if you don’t think you’re a developer, even if you don’t quite understand code, vibe coding something will be illuminating. I think it’s different than asking ChatGPT for a bolognese sauce recipe or how to change a tire. Building something that will instantly run on your computer and seeing the adjustments made in real-time from your plain English prompts is very cool and gives you a glimpse into how LLMs problem-solve.

A product manager’s 48 reflections on 2025

and why I’ve been making Bob Dylan songs about Sonic the Hedgehog

uxdesign.cc

There’s a myth that B2B marketing needs to be boring. Wrong. I’ve long believed that B2B advertising and marketing can and should be more consumer-like because at the end of the day, it’s a human on the other side of that message that needs to receive it. Sure, the buying cycle and decision-making is different, but the initial recipient is one person.

Creative director Scott McGuffie agrees, arguing in PRINT Magazine:

The best B2B work today doesn’t look different for the sake of it; it feels relevant to the world around it. Whether through wit, humanity, storytelling, or design, great B2B work connects to the same sensibilities that drive consumer creativity, allowing B2B to show up in new spaces, such as entertainment streaming services, once considered only a B2C space. It proves that professionalism and imagination are not mutually exclusive.

B2B Doesn’t Need to Be Dull – PRINT Magazine

B2B Doesn’t Need to Be Dull

Expectations say that B2B campaigns must be rational and serious, while B2C are creative and emotional. Yet that no longer reflects the world we live in.

printmag.com

We’ve been feeling it for a while. AI-generated posts and comments filling up the feeds on LinkedIn. Em dashes were said to be the tell that AI wrote the content. Other patterns are easy to spot, like the overuse of emojis in headings and my personal most-hated, the “it’s not X, it’s Y” construction. That construction is called an antithesis, and its use has exploded. And now that I’ve pointed it out, I’m sure you’ll notice it everywhere too. Sorry, not sorry.

Sam Kriss, exploring why AI writes the way it does:

A lot of A.I.’s choices make sense when you understand that it’s…trying to write well. It knows that good writing involves subtlety: things that are said quietly or not at all, things that are halfway present and left for the reader to draw out themselves. So to reproduce the effect, it screams at the top of its voice about how absolutely everything in sight is shadowy, subtle and quiet. Good writing is complex. A tapestry is also complex, so A.I. tends to describe everything as a kind of highly elaborate textile. Everything that isn’t a ghost is usually woven. Good writing takes you on a journey, which is perhaps why I’ve found myself in coffee shops that appear to have replaced their menus with a travel brochure. “Step into the birthplace of coffee as we journey to the majestic highlands of Ethiopia.” This might also explain why A.I. doesn’t just present you with a spreadsheet full of data but keeps inviting you, like an explorer standing on the threshold of some half-buried temple, to delve in.

All of this contributes to the very particular tone of A.I.-generated text, always slightly wide-eyed, overeager, insipid but also on the verge of some kind of hysteria. But of course, it’s not just the words — it’s what you do with them. As well as its own repertoire of words and symbols, A.I. has its own fundamentally manic rhetoric. For instance, A.I. has a habit of stopping midway through a sentence to ask itself a question. This is more common when the bot is in conversation with a user, rather than generating essays for them: “You just made a great point. And honestly? That’s amazing.”

Why Does A.I. Write Like … That?

(Gift Link) If only they were robotic! Instead, chatbots have developed a distinctive — and grating — voice.

nytimes.com