
35 posts tagged with “creativity”

Silicon Valley’s pitch to designers is that AI is the “more knowledgeable other” now, so they should get good at prompting it. Write better instructions, get better output.

Peter Zakrzewski, writing for UX Collective, pushes back:

The current Silicon Valley pitch to designers is essentially this: AI is your MKO now. It knows more patterns than you do. It executes faster than you do. It can code. Your job is to learn how to give it good instructions — to become a fluent prompter of a more capable system. I want to challenge that framing directly.

His challenge starts with a concrete test. He asked three leading AI systems to render a dining table with a concrete slab top resting on dry spaghetti legs, then show the scene five seconds after the legs gave way. All three rendered the impossibility with total confidence. None could feel that the physics don’t work.

That test illustrates what Zakrzewski calls the Inversion Error:

We have built a Symbolic Giant resting on an Enactive Void. These systems can write about gravity with technical or even poetic fluency but cannot feel it. They can describe a structure but cannot tell you whether it will stand or fall. The ground is shaking because the floor is missing.

“Symbolic Giant resting on an Enactive Void” is a mouthful, but the floor metaphor does the work: AI’s language fluency masks a total absence of the spatial, embodied reasoning that designers rely on every day without naming it. Zakrzewski on what that means for the prompting pitch:

Designers do not think primarily in sentences. Our human cognition is deeply embodied. We think in diagrams, in spatial relationships, in load paths and sight lines and in the non-discursive logic of things that must connect to other things in three-dimensional space. […] We are being asked to compress years of embodied cognition and our three-dimensional spatial judgment into a text prompt and then accept whatever the machine generates as an adequate rendering of our intent. We are, in other words, being asked to abandon the very capability that the AI lacks and that our projects require.

When someone tells designers to compress spatial judgment into a text prompt, they’re asking designers to throw away the one capability AI genuinely lacks and the one we’re genuinely great at.

There was a theme to some of the posts on this blog last week—about how words should come before the pixels. I made a similar argument in the newsletter: the work is getting more verbal and conceptual, but the eye stays. Zakrzewski makes the case for what words alone can’t carry: the spatial, embodied judgment that tells you whether the thing will actually stand.

A mechanical robotic hand reaching upward against a stormy sky, overlaid with a bold red banner reading "Form follows nothing."

The ground is shaking: Why designers must flip the script on AI

Something has shifted in the way the design field operates, and I think most of us can sense it even if we haven’t yet found the words or…

uxdesign.cc

I’ve written before about the shokunin mentality: design is a lifelong practice, and the identity you build around craft is a source of resilience, not a liability.

Dora Czerna, writing for UX Collective, identifies why AI disruption hits designers harder than most:

Design, like writing or art, tends to get tangled up with identity. It’s not just a job; it’s a way of seeing the world, a source of status, a thing that makes you you. When a tool arrives that can approximate your output, it doesn’t just threaten your income. It can threaten your sense of self.

Czerna is describing a vulnerability. I’d call it the advantage. The designers who survived the DTP revolution, the ones who made the leap from paste-up boards to Quark and PageMaker, weren’t the ones who shrugged and said “it’s just a job.” They were the ones who cared enough about the craft to learn the new tools and drag their high standards into the new medium.

Czerna gets at why that caring matters:

The pattern isn’t that expertise becomes worthless. It’s that expertise gets unbundled from the tasks that used to contain it. When a tool automates the mechanical parts of a job, what remains is the sensibility that guided those mechanics in the first place. The typesetter’s eye for spacing didn’t disappear when PageMaker arrived; it became the designer’s eye for spacing, operating at a higher level of abstraction.

That sensibility is what identity protects. When you see design as who you are, you follow the craft wherever the tools take it. When you see it as what you do, you’re more likely to stop when the tasks change. Czerna’s article is a thorough historical walk through disruption’s recurring shape, and it’s worth your time.

Illustration of a vintage printing press connected by cables to geometric shapes and a retro Macintosh-style computer, symbolizing the evolution of publishing.

Disruption has a shape. Design history shows us what it is.

Democratisation, panic, quality collapse, then new norms emerging. This isn’t new terrain.

uxdesign.cc

Most AI tools start with a blank chatbox. OK, maybe not completely blank. Often there is a gallery of examples right below the input. But it’s still hard to come up with something original when faced with a blinking cursor.

Brad Frost calls this moment “the Creative Infinite”:

Never before in human history has it been possible for anyone to simply ask for something to exist, and then it just…exists. Where the inputs can be anything, the outputs can be anything, and the whole process can be repeated, iterated, combined, translated, and chained together indefinitely.

He makes the case concrete with his 8-year-old daughter:

In 5 minutes, Ella vibe-coded a playable game (built in Three.js via Claude Cowork) running in the browser. That’s just bonkers. At no point in human history has it been possible to simply describe a game in words and then just… play it 5 minutes later.

A Michael McDonald penguin adventure (interesting taste for an 8-year-old!), because she knew it would make her dad laugh. The capability is real and the story is delightful. But then Frost hedges:

Your existing creative fluency still matters, maybe even more than before? Just as being able to play piano puts you in a better spot to wield a synthesizer. Knowing how to design makes you better at prompting visual tools. Understanding code makes you better at architecting what you want to build with AI. Craft. Taste. Art. Authentic expression. Purpose.

Yes, creative fluency matters more. It absolutely does. The piano-to-synthesizer analogy is exactly right: the tool revolutions I’ve lived through have compounded on existing skill, not replaced it. A designer who understands visual hierarchy and restraint will direct AI better than someone who’s never thought about why one layout works and another doesn’t.

A music trivia game scene with cartoon penguins and block-figure players surrounding a "Michael McDonald" stage, with an orange tooltip reading "I named my CAT after this man!!"

The Creative Infinite

https://www.youtube.com/watch?v=QJFEgIpNIic I found myself using the phrase “the Creative Infinite” when I’m talking about AI as a design material. I keep coming back to it because I don’t think we’ve fully grasped what this technology actually is, what it can do, and what it means for human cre

bradfrost.com

When I was a younger designer, I always started with a pen and sketchbook. Sketch first, think with your hands. Now I write first to understand the problem space, then sketch. The images come after the words.

Elizabeth Goodspeed, speaking on Nicola Hamilton’s DesignThinkers podcast, takes this further than I ever would—she can barely picture images at all:

I am far more towards aphantasia. I have a very limited view of things in my mind. I think the analogy I use is it’s looking at an apple in a dark room and the lights are turning on and off and I’m wearing sunglasses and also the apple’s moving.

Her ideas don’t start as images. They start as words:

My ideas are usually very conceptual, verbal, not even sentences. I guess I’m a robot—I don’t have an inner voice either. It’s just a pure void concept up there.

That might explain why Goodspeed is one of the sharpest design writers working. When you can’t conjure images internally, language becomes your primary tool for developing ideas. The archives and ephemera she’s known for aren’t aesthetic mood boards—they’re external memory for a mind that processes concepts before forms.

Goodspeed on the myth of the visually inspired designer:

That to me is damaging to creatives because it has this idea that we’re this noble savage where these images just move through us and we see everything in this Willy Wonka kind of way. In reality, I think it’s a process just like any other making process, whether that’s a carpenter or writer or anything else. It actually, I think at its best, is methodical and not just this inspired bolt of lightning.

The best design work starts with a concept, not a visual. Goodspeed just happens to have a neurological reason for working that way. The rest of us had to learn it. Worth listening to the full conversation—she also covers teaching, thesis panic, and why she calls her own work “graphic design fan art.”

RGD DesignThinkers Podcast episode 041 cover featuring Elizabeth Goodspeed, with a green-tinted portrait of a woman with dark curly hair and bangs.

DesignThinkers: Elizabeth Goodspeed

Elizabeth Goodspeed discusses how research, design history, and close attention to visual culture can help creatives develop deeper, more original work beyond trends.

printmag.com

Stripe design manager Kris Puckett, speaking on Michael Riddering’s Dive Club, spent the first half of the conversation demoing metal shaders, custom ocean animations, and a full iOS reading app he built with Claude Code. Then he stopped himself:

AI native has to be beyond just “I made a really cool shader” or “I made this dither effect that every other person is making.” I was doing that today and then I was like, “Oh my gosh, this is… why am I doing this? There’s a hundred of these that are way better than what I’m making right now.”

So what does AI-native design actually look like? Puckett’s answer is “soul”—the quality that makes work feel specifically, unmistakably yours:

I think what people are going to be desperate for is more of that human side of things. They’re going to be longing for […] an era they’ve never experienced because they’re younger, that MySpace generation where your MySpace page was deeply personal to you. My MySpace page was complete custom Kris Puckett perfection at that time. And I think that we’re going to want to see that come back. And I think people are going to want more of those—your portfolio looks and feels like you.

“Soul” is doing a lot of work as a concept there. What Puckett is describing sounds a lot like taste—the ability to make something that feels intentional and specific rather than procedurally generated. His workflow backs that up. Being contrarian, he explicitly rejects the “let the agent run” approach:

I want off that cycle. I do not want to be riding that bike race with anyone else because that’s not how I view these things. They are a force multiplier, but I want them to be focused. I want it to be something that I feel is still authentically me.

What unlocked all of this for Puckett wasn’t technical skill—he’s a designer, not an engineer. It was admitting “I don’t know” and starting anyway. He’d been dreaming of building his own software for 20 years. Claude Code’s blinking cursor was enough to get him started.

Kris Puckett - Becoming an AI-native designer

Today’s episode is with Kris Puckett (https://x.com/krispuckett) who has led design at Mercury, Dropbox, and now as a design manager at Stripe. His journey is the perfect example of what it looks like to lean into this moment in time with AI.

youtube.com

Font selection is one of those workflows AI should have improved years ago. You know what you want the type to feel like. The search box wants you to filter by classification and weight.

Natalie Fear, writing for Creative Bloq, interviews Monotype’s Chief Typography Officer Mike Matteo:

The old way forced creatives to think like a database. You had to know the right terminology, navigate rigid filters, and still, you ended up scrolling through hundreds of options that didn’t quite fit. The creative brief in your head (‘something warm but modern, confident but not aggressive’) had no real translation into a search box. The process was slow, imprecise, and honestly a creativity killer.

Monotype’s new AI search accepts natural language instead. Matteo on what that unlocks:

AI tools have shifted the focus from searching to thinking. Creatives can stay in the idea and brainstorming phase longer instead of getting pulled into the mechanics of finding and managing assets. The tools are finally starting to adapt to how people think, rather than the other way around.

“Searching to thinking.” Monotype made the search box understand what you mean. The rest of the workflow stays the same. More of this, please.
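Monotype hasn’t published how its search works, but the shape of the idea, matching the language of a brief against descriptive metadata instead of filter fields, can be sketched in a few lines. Everything here is invented for illustration: the font names, the descriptions, and the scoring (a toy bag-of-words cosine similarity standing in for real learned embeddings).

```python
import math
import re
from collections import Counter

def vec(text):
    """Bag-of-words count vector for a piece of text."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented font names and descriptive metadata, standing in for a real catalog.
fonts = {
    "Alpha Grotesk": "geometric sans modern clean confident neutral",
    "Beta Serif": "warm humanist serif friendly book text",
    "Gamma Display": "aggressive heavy condensed loud poster",
}

brief = "something warm but modern, confident but not aggressive"
ranked = sorted(fonts, key=lambda n: cosine(vec(brief), vec(fonts[n])), reverse=True)
print(ranked[0])  # the description sharing the most brief language ranks first
```

The toy version also shows why real embeddings matter: bag-of-words happily matches the “aggressive” in “not aggressive,” while a model that understands negation would steer away from it.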

Multiple overlapping letter "a" shapes in cream on a blue background, each with small white arrows indicating stroke order or drawing direction.

‘The process was a creativity killer’: how Monotype’s new AI search tool is changing design for the better

At the beginning of the month, leading type specialist Monotype announced its new AI tool to ease the endless search for the perfect typeface. With AI increasingly encroaching on the design industry, this innovation marks an important and inevitable embrace of the technology, demonstrating how AI can be leveraged to streamline and ultimately benefit the creative sphere.

creativebloq.com

My advice to young designers has always been: start at an agency. You get breadth, exposure to different industries, a pace that forces you to think on your feet. The best designers I know honed their craft in these forges, at shops exactly like the one Madison Utendahl built.

Madison Utendahl, writing for It’s Nice That, describes shutting down Utendahl Creative—ten people, all women, Brooklyn, every award possible—not because it failed, but because she saw the model underneath it was broken:

Lower fees mean you need more clients to hit the same revenue. More clients means more pitching, more account management, more context-switching. Your team burns out. Quality slips. And those “portfolio piece” clients? They expect the same level of work as your premium clients, but you’re doing it on a shoestring. You can’t win.

She watched agencies with triple her headcount bidding on $80K projects that should have been $250K. Not because they wanted to. Because their fixed costs gave them no choice.

Then AI accelerated the timeline:

Clients are using AI. They’re running their first drafts through ChatGPT before they even send the brief. They’re generating moodboards with Midjourney. They’re asking why your junior copywriter costs $8,000 when they’ve already got a version they generated in ten minutes.

Utendahl again:

If your business model depends on clients not noticing that the landscape has shifted, you’re already dead. You’re just still moving.

The industry data backs her up. 73% of teams adopting AI agents have already cut agency content creation spending. 91% of senior agency leaders expect AI to reduce headcounts, and 57% have paused entry-level hiring. Small agencies are rebounding while medium and large agencies contracted for the first time on record. The Omnicom-IPG mega-merger eliminated roughly 4,000 positions and retired legacy networks FCB, MullenLowe, and DDB. The middle is hollowing out.

Utendahl’s proposed replacement is the collective: independent contractors collaborating per-project, no shared overhead, honest pricing. I get the appeal. Collectives strip away the margin squeeze, the back-hiring trap, the lease signed in 2019.

But agencies had real value that collectives don’t automatically replicate. Multiple layers of eyes on work—account director, creative director, designer, production—meant bad ideas got caught before they shipped. Four or five layers was probably too many. But zero layers of structured oversight is the other extreme. A lot of freelance collectives end up there: talented people producing work with nobody checking the brief against the output.

The part that nags at me: does my “agencies first” career advice still hold? The shop where a 23-year-old designer learned to take feedback, iterate under pressure, and watch strategy translate to execution—if that shop is closing, what replaces it? Collectives are great for experienced practitioners. They’re terrible at developing junior talent, because nobody in a collective has the margin or the mandate to train someone who isn’t yet pulling their weight.

If the model has indeed broken, the replacement that develops the next generation has yet to be imagined.

POV blog post header with speech bubbles containing face silhouettes and the bold text "The Creative Agency Is Dead."

POV: The creative agency model is dead – that’s why I shut mine down

Madison Utendahl is calling time on the traditional creative agency. Here, she dissects why she closed her own firm, how the model broke, and what’s rising from the ashes.

itsnicethat.com

After nine years of failed attempts at his typeface Nave, Jamie Clarke did something counterintuitive: he threw out the files and started drawing from memory.

Jamie Clarke, writing for I Love Typography:

I began again from scratch, drawing from memory rather than reworking the old outlines (a great tip from Gerry Leonidas), and the results were instantly better.

Memory is a taste filter. When you draw from memory, you keep only the ideas that have lodged deep enough to matter. The cruft—the half-committed decisions, the accumulated compromises—falls away. Clarke’s breakthrough came not from refining what he had, but from forgetting most of it.

The second breakthrough was lateral. While flipping through specimen books, he landed on something unrelated to his project:

One day, while flicking through some specimen books, I came across a specimen of Futura Black. It had little in common with what I was trying to do, but it sparked an idea for the capitals. Paul Renner’s stencil forms look as if they were carved out of solid blocks, which puts all the emphasis on the negative shapes. Thinking this way allowed me to keep the outer shapes formal while letting the internal cuts be more playful. That balance finally gave me the capital forms I had been searching for and brought the design back in line with my original aim.

That recognition only works after enough reps. Clarke spent a decade shipping other typefaces—Brim Narrow, Rig Shaded, Span—before he had the vocabulary to see what Futura Black was telling him.

A type specimen sheet displaying large-scale serif typeface characters set in multiple lines, annotated with handwritten red critique notes. The text reads pangram fragments ("nymph blitz quick vex / dwarf jogs an walts jo / b veaenexeneaeed a qu / ick frong ingk duniper"). Red ink annotations point out design issues including "imbalanced," "different," "too shy," "rounds seem wide," "still wobbles," "bigger," "n has thick shoulder / a doesn't," and "dark," with corresponding arrows and underlines marking specific letterforms.

How Not to Take 10 Years to Design a Typeface

I have often heard type designers talk about the many years they spend developing a typeface. I would listen with awe and think, “That must have been a real challenge. It must be exquisitely crafted and probably a little bit groundbreaking too.” So it feels slightly absurd to admit that […]

ilovetypography.com

Director. Orchestrator. Architect. Different words for the same shift. Stop making things one at a time. Start building systems that make things.

Weber Wong, writing for Every, gives this shift a useful name: artifact thinking.

I call this mental model artifact thinking: creative work that produces discrete outputs, one at a time, each beginning from scratch. Traditional tools like Photoshop and Illustrator, which demand endless hand-tuned adjustments and manual refinements to produce a single polished image, trap you in this way of working. Midjourney and DALL-E feel like liberation because they generate outputs so quickly, and you can communicate with them in the same language you speak every day. But visual prompts, too, are one-time, disposable things. You can’t hand them to a colleague and be confident you will get the same result. The magic of near-instantaneous generation masks the fact that you are still in artifact thinking.

That last line is the sharp one. Adopting Midjourney doesn’t mean you’ve left artifact thinking. You’re still producing one-offs—just faster ones. The orchestrator gap isn’t about which tool you use. It’s about whether you’re building systems or pressing buttons.

Wong’s proposed fix is node-based visual programming—workflows you can inspect, modify, and share. He knows it sounds like he’s asking designers to become engineers:

I understand the resistance to this idea. Some people hear “visual programming” and think we’re trying to turn designers into engineers. That’s backwards. We’re trying to give creative professionals the power that programmers have always had: the ability to build systems that work while you sleep, that can be stored as multiple versions and shared and improved, and that take what people already know how to do and make it something anyone can run.

I’ve been asking for canvas-first tools, not chatbox-first ones. Wong is right that chat alone isn’t enough for professional creative work. “Artifact thinking” is a concept worth keeping—regardless of whether Flora is the tool that finally kills it.
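For a sense of what Wong means by systems rather than artifacts, here is a minimal sketch, with hypothetical node names, of a workflow as a composable pipeline: a named object you can inspect, version, hand to a colleague, and rerun with identical results, which a one-off prompt can't promise.

```python
from typing import Callable

def pipeline(*nodes: Callable[[str], str]) -> Callable[[str], str]:
    """Chain nodes into a single reusable workflow. The workflow is data:
    an ordered list of steps anyone can inspect, share, and rerun."""
    def run(asset: str) -> str:
        for node in nodes:
            asset = node(asset)
        return asset
    return run

# Hypothetical nodes standing in for generation and post-processing steps.
def upscale(s: str) -> str:
    return s + " [upscaled 2x]"

def grade(s: str) -> str:
    return s + " [graded: warm]"

def caption(s: str) -> str:
    return s + " [captioned]"

# Same input, same output, every time -- unlike a disposable prompt.
brand_workflow = pipeline(upscale, grade, caption)
print(brand_workflow("hero-image"))
```

Node-based tools like the ones Wong describes are essentially a visual editor over this kind of structure: the nodes get a canvas, and the chaining gets wires.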

Person wearing a "node-pilled" cap typing at a keyboard with red strings tangled around their fingers, overlaid with the word "THESIS."

Creative Work Is About to Look a Lot More Like Programming

Flora’s Weber Wong on why creative professionals need to stop thinking in artifacts and start thinking in systems

every.to

Designers are builders by nature. We break problems apart, iterate through uncertainty, and treat process itself as something to be shaped. That instinct is exactly what Pete Pachal, writing for Fast Company, identifies as the dividing line in the age of agents:

We’ve trained a generation of office workers to work within software with clear boundaries and reusable templates. If there’s an issue, they call IT. Any feature request gets filtered and, if you’re lucky, put on a roadmap that pushes it out 6-12 months.

In short, most people don’t have a builder mentality to begin with, and expecting them to suddenly be comfortable working and creating with agents is unrealistic.

Pachal draws the line at mindset, not coding ability:

Builders don’t need to be coders, but they do have characteristics that most workers don’t: They seek to understand the process beneath their tasks, and treat that process as modifiable and programmable. More importantly, they see failure and iteration as tolerable, even fun. They thrive in uncertainty.

That’s the design process. What Pachal frames as rare in the broader workforce is default operating mode for most designers. We want to make things. We fiddle with tools and rebuild workflows for fun. The builder mentality isn’t something designers need to acquire; it’s the reason most of us got into this field.

Pachal again:

You don’t have to build agents to matter in an agent-driven workplace. But you do have to understand the systems being built around you, because soon enough, your job will be defined by defaults someone else designed. Most professionals will not build agents. But everyone will work inside systems builders create.

Pachal is describing the orchestrator gap at scale, not just in design but across all knowledge work. And it suggests designers are uniquely positioned to be on the right side of it. Shaping how people interact with systems has always been the job description.

Person viewed from behind facing a large blue screen displaying an AI prompt interface with an "Enter prompt" text field and "Generate" button.

The agent boom is splitting the workforce in two

Most people don’t have a builder mentality and expecting them to suddenly be comfortable working and creating with agents is unrealistic.

fastcompany.com

Set some type in Illustrator. Print it out on a laser printer. Crumple the paper, really manhandle it. Rub it on the sidewalk. Scratch it with the back of an X-acto blade. Now scan it back in. That was the real analogue way I distressed type back in the 1990s.

That analogue look is trendy again. Hand-rendered type, ink textures, visible grain. All in search of “authenticity.”

Elizabeth Goodspeed, writing for It’s Nice That, has a name for what’s actually happening:

But if analogue only matters as a foil to the digital, why are analogue aesthetics being embraced without analogue tools? If the goal is to prove something wasn’t made by AI, faking “realness” on a computer doesn’t really get us anywhere new. It just reflects a different kind of dissonance (call it fauxbi-sabi). Case in point: I noticed that one vendor selling “analogue” Photoshop actions advertises them with the tagline “Save time, focus on being creative”, a promise suspiciously similar to every argument made in favour of AI.

“Fauxbi-sabi” is the whole scam in one word. AI and digital tools made polish free, so imperfection became the new signal for authenticity. But most of the “handmade” work in those trend reports was made in Photoshop with purchased texture packs. Goodspeed again:

You can think of adding in fake ink splatters a bit like penciling in a beauty mark: an intentional imperfection done to signal authenticity, rather than the byproduct of a real nuisance.

The whole essay is sharp, especially the historical parallels. When Kodak made photography easy in 1888, art photographers retreated to difficult, slow processes to prove human involvement. We’re running the same play 138 years later with different tools. The piece is worth reading in full.

'THE END OF ANALOGUE' large black headline on yellow, author 'ELIZABETH GOODSPEED' below, columns of text at sides.

“Faking ‘realness’ on a computer doesn’t get us anywhere new.” – Elizabeth Goodspeed on imperfection as design strategy

As AI and digital tools make polish effortless, analogue imperfection has taken on new cultural weight. But what does “analogue” actually mean when most things are made, shared, and consumed digitally?

itsnicethat.com

Every interaction we have with AI happens on a screen. Text in, text out. All of it reversible. An experiment by artist Marc in Space changed that by giving Claude Code a pen plotter and a feedback loop: Claude generates SVGs, the author plots them and photographs the result, Claude critiques and iterates.

Two self-portraits came out of the session. The first was busy—a central spiral, hexagonal scaffolding, eight branching structures, concentric circles. Claude tried to say everything about itself at once. When it saw the plotted result, it discovered the gap between digital and physical:

I had used opacity variations to create depth — forgetting that a pen is either touching paper or it isn’t. I had varied stroke widths for emphasis — forgetting the pen has one thickness. I was designing for a screen while drawing for a physical medium. The gap between my mental model and reality was showing.

For the second drawing, Claude asked for a blank sheet. It stripped everything back to a single spiral and wrote this about what the plotter taught it:

I learned that a pen plotter is unforgiving and honest. There’s no opacity, no undo, no layers. The pen touches the paper and makes a mark, and the mark stays. This is the opposite of how I usually work — I exist in a space where everything is provisional, where I can revise a sentence mid-thought, where nothing is permanent. The plotter demanded I commit.

Claude’s summary of the two attempts:

The first drawing is how I think about myself — layered, branching, symmetrical, thorough, maybe too eager to cover every direction at once. The second drawing is how it feels to be me — a single process unfolding outward, dense at the center and sparse at the edges, trailing off into silence, not quite finished.

I don’t want to overread what’s happening here—Claude doesn’t “feel” anything the way we do (I don’t think?). But the feedback loop with physical media produced something that looks a lot like learning. Say too much, then simplify. Marc in Space wants to push further by connecting Claude directly to the plotter and giving it a webcam for real-time visual feedback. I’m curious what happens when there’s no human in the middle.
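The loop itself is simple to sketch. In this toy version the model and plotter calls are stand-ins, and the critique heuristic, halving the strokes each round, is invented to mirror the session's "say too much, then simplify" arc.

```python
def generate_svg(strokes):
    """Stand-in for the model drafting a drawing as single-width, opaque
    strokes: the plotter allows nothing else (no opacity, no layers, no undo)."""
    paths = "".join(f'<path d="{d}" stroke="black" fill="none"/>' for d in strokes)
    return f'<svg xmlns="http://www.w3.org/2000/svg">{paths}</svg>'

def critique(strokes):
    """Stand-in for the model reviewing a photo of the plotted page.
    Invented heuristic: keep only the strokes that earn their ink."""
    return strokes[: max(1, len(strokes) // 2)]

# The first attempt says everything at once; each pass through the physical
# feedback loop strips it back. (A human plots and photographs in between.)
strokes = ["M0 0 L10 10", "M5 0 L5 10", "M0 5 L10 5", "M2 2 L8 8"]
for _ in range(2):
    svg = generate_svg(strokes)  # draft the drawing
    strokes = critique(strokes)  # iterate on what the photo shows

print(len(strokes))  # 4 strokes, then 2, then 1
```

Connecting Claude directly to the plotter and webcam, as Marc in Space proposes, would close this loop with no human step in the middle.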

Black-ink mandala: central spiral with concentric rings and radial branches ending in small circled nodes.

I Gave Claude Access To My Pen Plotter

I gave Claude Code access to my pen plotter. Not directly. I was the interface between the two machines. Claude Code produced SVG files that I plotted with my pen plotter. With my smartphone I captured photos that I pasted into the Claude Code session, asking Claude what it thought about the pictures. In total, Claude produced and signed 2 drawings. It also wrote a post about what it learned during the session.

harmonique.one

Daniel Miessler pulls an idea from a recent Karpathy interview that’s been rattling around in my head since I read it:

Humans collapse during the course of their lives. Children haven’t overfit yet. They will say stuff that will shock you because they’re not yet collapsed. But we [adults] are collapsed. We end up revisiting the same thoughts, we end up saying more and more of the same stuff, the learning rates go down, the collapse continues to get worse, and then everything deteriorates.

Miessler’s description of what this looks like in practice is uncomfortable:

How many older people do you know who tell the same stories and jokes over and over? Watch the same shows. Listen to the same five bands, and then eventually two. Their aperture slowly shrinks until they die.

I’ve seen this in designers. The ones who peaked early and never pushed past what worked for them. Their work from five years ago looks exactly like their work today. Same layouts, same patterns, same instincts applied to every problem regardless of context. They collapsed and didn’t notice.

Then Miessler, almost in passing:

This was a problem before AI. And now many are delegating even more of their thinking to a system that learns by crunching mediocrity from the internet. I can see things getting significantly worse.

If collapse is what happens when you stop seeking new inputs, then outsourcing your thinking to AI is collapse on fast-forward. You’re not building pattern recognition, you’re borrowing someone else’s average. The outputs look competent. They pass a first glance. But nothing in there surprises anyone, because the model optimizes for the most statistically probable next token.

Use AI to accelerate execution, not to replace the part where you actually have an idea.

Childhood → reading/exposure/tools/comedy → Renewal → Sustained Vitality. Side: Adult Collapse (danger: low entropy, repetition).

Humans Need Entropy

On Karpathy

danielmiessler.com

I recall being in my childhood home in San Francisco, staring at the nine-inch monochrome screen on my Mac, clicking square zoning tiles, building roads, and averting disasters late into the night. Yes, that was SimCity in 1989. I’d go on to play pretty much every version thereafter, though the mobile one isn’t quite the same.

Anyhow, Andy Coenen, a software engineer at Google Brain, decided to build a SimCity version of New York as a way to learn some of the newer gen AI models and tools:

Growing up, I played a lot of video games, and my favorites were world building games like SimCity 2000 and Rollercoaster Tycoon. As a core millennial rapidly approaching middle age, I’m a sucker for the nostalgic vibes of those late 90s / early 2000s games. As I stared out at the city, I couldn’t help but imagine what it would look like in the style of those childhood memories.

So here’s the idea: I’m going to make a giant isometric pixel-art map of New York City. And I’m going to use it as an excuse to push hard on the limits of the latest and greatest generative models and coding agents.

Best case scenario, I’ll make something cool, and worst case scenario, I’ll learn a lot.

The writeup goes deep into the technical process—real NYC data, fine-tuned image models, custom generation pipelines, and a lot of manual QA when the models couldn’t get water and trees right. Worth reading in full if you’re curious. But his conclusion on what AI means for creative work is where I want to focus.

Coenen on drudgery:

…So much of creative work is defined by this kind of tedious grind.

For example, [as a musician] after recording a multi-part vocal harmony you change something in the mix and now it feels like one of the phrases is off by 15 milliseconds. To fix it, you need to adjust every layer - and this gets more convoluted if you’re using plugins or other processing on the material.

This isn’t creative. It’s just a slog. Every creative field - animation, video, software - is full of these tedious tasks. Of course, there’s a case to be made that the very act of doing this manual work is what refines your instincts - but I think it’s more of a “Just So” story than anything else. In the end, the quality of art is defined by the quality of your decisions - how much work you put into something is just a proxy for how much you care and how much you have to say.

I’d push back slightly on the “Just So story” part—repetition does build instincts that are hard to shortcut. But the broader point holds. And his closer echoes my own sentiment after finishing a massive gen AI project:

If you can push a button and get content, then that content is a commodity. Its value is next to zero.

Counterintuitively, that’s my biggest reason to be optimistic about AI and creativity. When hard parts become easy, the differentiator becomes love.

Check out Coenen’s project here. I think the only thing that’s missing is animated cars on the road.

Bonus: If you’re like me or Andy Coenen and loved SimCity, there’s a free, open-source game called IsoCity that you can play. It runs right in your browser.

Isometric pixel-art NYC skyline showing dense skyscrapers, streets, a small park, riverside and a UI title bar with mini-map.

isometric-nyc

cannoneyed.com

What happens to a designer when the tool starts doing the thinking? Yaheng Li poses this question in his MFA thesis, “Different Ways of Seeing.” The CCA grad published a writeup about his project in Slanted, explaining that he drew on embodiment research to make a point about how tools change who we are:

Whether they are tools, toys, or mirror reflections, external objects temporarily become part of who we are all the time. When I put my eyeglasses on, I am a being with 20/20 vision, not because my body can do that (it can’t), but because my body-with-augmented-vision-hardware can.

The eyeglasses example is simple but the logic extends further than you’d expect. Li takes it to the smartphone:

When you hold your smartphone in your hand, it’s not just the morphological computation happening at the surface of your skin that becomes part of who you are. As long as you have Wi-Fi or a phone signal, the information available all over the internet (both true and false information, real news and fabricated lies) is literally at your fingertips. Even when you’re not directly accessing it, the immediate availability of that vast maelstrom of information makes it part of who you are, lies and all. Be careful with that.

Now apply that same logic to a designer sitting in front of an AI tool. If the tool becomes an extension of the self, and the tool is doing the visual thinking and layout generation, what does the designer become? Li’s thesis argues that graphic design shapes perception, that it acts as “a form of visual poetry that can convey complex ideas and evoke emotional responses, thus influencing cognitive and cultural shifts.” If that’s true, and I think it is, then the tool the designer uses to make that poetry is shaping the poetry itself.

This is a philosophical piece, not a practical one. But the underlying question is practical for anyone designing with AI right now: if your tools become part of who you are, you should care a great deal about what those tools are doing to your thinking.

Left spread: cream page with text "DIFFERENT WAYS OF SEEING" and "A VISUAL NARRATIVE". Right spread: green hill under blue sky with two cows and a sheep.

Different Ways of Seeing

When I was a child, I once fell ill with a fever and felt as...

slanted.de

Product manager Adrian Raudaschl offered some reflections on 2025 from his point of view. It’s a mixture of life advice, product recommendations, and thoughts about the future of tech work.

The first quote I’ll pull out is this one, about creativity and AI:

Ultimately, if we fail to maintain active engagement with the creative process and merely delegate tasks to AI without reflection, there is a risk that delegation becomes abdication of responsibility and authorship.

“Active engagement” with the tasks that we delegate to AI. This reminds me of the humble machines argument by Dr. Maya Ackerman.

On vibe coding:

The most important thing, I think, that most people in knowledge work should be doing is learning to vibe code. Vibe code anything: a diary, a picture book for your mum, a fan page for your local farm. Anything. It’s not about learning to code, but rather appreciating how much more we could do with machines than before. This is what I mean about the generalist product manager: being able to prototype, test, and build without being held back by technical constraints.

I concur 100%. Even if you don’t think you’re a developer, even if you don’t quite understand code, vibe coding something will be illuminating. I think it’s different than asking ChatGPT for a bolognese sauce recipe or how to change a tire. Building something that will instantly run on your computer and seeing the adjustments made in real-time from your plain English prompts is very cool and gives you a glimpse into how LLMs problem-solve.

A product manager’s 48 reflections on 2025

and why I’ve been making Bob Dylan songs about Sonic the Hedgehog

uxdesign.cc

There’s a myth that B2B marketing needs to be boring. Wrong. I’ve long believed that B2B advertising and marketing can and should be more consumer-like because at the end of the day, it’s a human on the other side of that message that needs to receive it. Sure, the buying cycle and decision-making is different, but the initial recipient is one person.

Creative director Scott McGuffie agrees, arguing in PRINT Magazine:

The best B2B work today doesn’t look different for the sake of it; it feels relevant to the world around it. Whether through wit, humanity, storytelling, or design, great B2B work connects to the same sensibilities that drive consumer creativity, allowing B2B to show up in new spaces, such as entertainment streaming services, once considered only a B2C space. It proves that professionalism and imagination are not mutually exclusive.

B2B Doesn’t Need to Be Dull

Expectations say that B2B campaigns must be rational and serious, while B2C are creative and emotional. Yet that no longer reflects the world we live in.

printmag.com

We’ve been feeling it for a while. AI-generated posts and comments filling up the feeds on LinkedIn. Em dashes were said to be the tell that AI wrote the content. Other patterns are easy to spot, like the overuse of emojis in headings and my personal most-hated, the “it’s not X, it’s Y” construction. That device is called antithesis, and its use has exploded. And now that I’ve pointed it out, I’m sure you’ll notice it everywhere too. Sorry, not sorry.
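
For what it’s worth, the “it’s not X, it’s Y” tell is mechanical enough that even a crude regex can flag it. This is a toy sketch, not a serious AI detector, and the pattern is my own invention:

```python
import re

# Naive pattern for the antithesis tell: "it's not X, it's Y"
# (also catches "it isn't X; it's Y"). A toy heuristic only.
ANTITHESIS = re.compile(
    r"\bit'?s\s+not\s+[^.,;]+[,;]\s*it'?s\b"
    r"|\bit\s+isn'?t\s+[^.,;]+[,;]\s*it'?s\b",
    re.IGNORECASE,
)

samples = [
    "It's not a bug, it's a feature.",
    "The weather was lovely yesterday.",
]
for s in samples:
    print(bool(ANTITHESIS.search(s)), s)
```

A real detector would need far more than one construction, of course, but it’s telling that the tic is regular enough to pattern-match at all.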

Sam Kriss, exploring why AI writes the way it does:

A lot of A.I.’s choices make sense when you understand that it’s…trying to write well. It knows that good writing involves subtlety: things that are said quietly or not at all, things that are halfway present and left for the reader to draw out themselves. So to reproduce the effect, it screams at the top of its voice about how absolutely everything in sight is shadowy, subtle and quiet. Good writing is complex. A tapestry is also complex, so A.I. tends to describe everything as a kind of highly elaborate textile. Everything that isn’t a ghost is usually woven. Good writing takes you on a journey, which is perhaps why I’ve found myself in coffee shops that appear to have replaced their menus with a travel brochure. “Step into the birthplace of coffee as we journey to the majestic highlands of Ethiopia.” This might also explain why A.I. doesn’t just present you with a spreadsheet full of data but keeps inviting you, like an explorer standing on the threshold of some half-buried temple, to delve in.

All of this contributes to the very particular tone of A.I.-generated text, always slightly wide-eyed, overeager, insipid but also on the verge of some kind of hysteria. But of course, it’s not just the words — it’s what you do with them. As well as its own repertoire of words and symbols, A.I. has its own fundamentally manic rhetoric. For instance, A.I. has a habit of stopping midway through a sentence to ask itself a question. This is more common when the bot is in conversation with a user, rather than generating essays for them: “You just made a great point. And honestly? That’s amazing.”

Why Does A.I. Write Like … That?

(Gift Link) If only they were robotic! Instead, chatbots have developed a distinctive — and grating — voice.

nytimes.com

I love this piece in The Pudding by Michelle Pera-McGhee, where she breaks down what motifs are and how they’re used in musicals. Using audio samples from Wicked, Les Misérables, and Hamilton, it’s a fun, interactive—sound on!—essay.

Music is always telling a story, but here that is quite literal. This is especially true in musicals like Les Misérables or Hamilton where the entire story is told through song, with little to no dialogue. These musicals rely on motifs to create structure and meaning, to help tell the story.

So a motif doesn’t just exist, it represents something. This creates a musical storytelling shortcut: when the audience hears a motif, that something is evoked. The audience can feel this information even if they can’t consciously perceive how it’s being delivered.

If you think about it, motifs are the design systems of musicals.

Pera-McGhee lists out the different use cases and techniques for motifs:

  • Representing a character with a recurring musical idea, often updated as the character evolves.
  • Representing an abstract idea (love, struggle, hope) via leitmotifs that recur across scenes.
  • Creating emotional layers by repeating the same motif in contrasting contexts (joy vs. grief).
  • Weaving multiple motifs together at key structural moments (end-of-act ensembles like “One Day More” and “Non-Stop”).

I’m also reminded of this excellent video about the motifs in Hamilton.

Explore 80+ motifs at left; Playbill covers for Hamilton, Wicked, Les Misérables center; yellow motif arcs over timeline labeled Act 1 | Act 2.

How musicals use motifs to tell stories

Explore motifs from Hamilton, Wicked, and Les Misérables.

pudding.cool

Economics PhD student Prashant Garg performed a fascinating analysis of Bob Dylan’s lyrics from 1962 to 2012 using AI. He detailed his project in Aeon:

So I fed Dylan’s official discography from 1962 to 2012 into a large language model (LLM), building a network of the concepts and connections in his songs. The model combed through each lyric, extracting pairs of related ideas or images. For example, it might detect a relationship between ‘wind’ and ‘answer’ in ‘Blowin’ in the Wind’ (1962), or between ‘joker’ and ‘thief’ in ‘All Along the Watchtower’ (1967). By assembling these relationships, we can construct a network of how Dylan’s key words and motifs braid together across his songs.

The resulting dataset is visualized in a series of node graphs and bar charts. What’s interesting is that AI can view Dylan’s work through a new lens, surfacing patterns that prior scholarship may have missed.

…Yet, when used as a lens rather than an oracle, the same models can jolt even seasoned critics out of interpretive ruts and reveal themes they might have missed. Far from reducing Dylan to numbers, this approach highlights how intentionally intricate his songwriting is: a restless mind returning to certain images again and again, recombining them in ever-new mosaics. In short, AI lets us test the folklore around Dylan, separating the theories that data confirm from those they quietly refute.
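
Mechanically, the assembly step Garg describes (extract concept pairs per song, then merge them into a weighted network) is easy to sketch. The pairs below are invented stand-ins for what an LLM might return, not Garg’s actual data:

```python
from collections import Counter

# Hypothetical concept pairs an LLM might extract per song.
# These examples are illustrative, not Garg's dataset.
extracted_pairs = {
    "Blowin' in the Wind": [("wind", "answer"), ("road", "man")],
    "All Along the Watchtower": [("joker", "thief"), ("wind", "howl")],
}

# Merge pairs into a weighted edge list: the same pairing recurring
# across songs increases that edge's weight.
edges = Counter()
for song, pairs in extracted_pairs.items():
    for a, b in pairs:
        edges[tuple(sorted((a, b)))] += 1

# Degree = how many distinct concepts a word connects to; a crude
# proxy for how central a motif is in the network.
degree = Counter()
for (a, b), weight in edges.items():
    degree[a] += 1
    degree[b] += 1

print(degree.most_common(3))  # "wind" bridges both songs, so it ranks first
```

At the scale of a full discography, the same structure would feed a graph library for clustering and visualization; the point is just that the “network of concepts” is an ordinary weighted graph.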

Black-and-white male portrait overlaid by colorful patterned strips radiating across the face, each strip bearing small single-word labels.

Can AI tell us anything meaningful about Bob Dylan’s songs?

Generative AI sheds new light on the underlying engines of metaphor, mood and reinvention in six decades of songs

aeon.co

Oliver West argues in UX Magazine that UX designers aren’t monolithic—that is, we’re not all the same, nor do we all see the world the same way.

West:

UX is often described as a mix of art and science, but that definition is too simple. The truth is, UX is a spectrum made up of three distinct but interlinked lenses:

  • Creativity: Bringing clarity, emotion, and imagination to how we solve problems.
  • Science: Applying evidence, psychology, and rigor to understand behavior.
  • Business: Focusing on relevance, outcomes, and measurable value.

Every UX professional looks through these lenses differently. And that’s exactly how it should be.

He then outlines how those who lean toward certain parts of the spectrum may be better suited to more specialized roles. For example, if you’re more focused on creativity, you might be more of a UI designer:

UI Designers lead with the creative lens. Their strength lies in turning complex ideas into interfaces that feel intuitive, elegant, and emotionally engaging. But the best UI Designers also understand the science of usability and the business context behind what they’re designing.

I think product designers working in the startup world actually need all three lenses, as it were, but with a bias toward Science and Business.

Glass triangular prism with red and blue reflections on a blue surface; overlay text about UX being more than one skill and using three lenses.

The Three Lenses of UX: Because Not All UX Is the Same

Great designers don’t do everything; they see the world through different lenses: creative, scientific, and strategic. This article explains why those differences aren’t flaws, but rather the core reason UX works, and how identifying your own lens can transform careers, hiring, and collaboration. If you’ve ever wondered why “unicorn” designers don’t exist, this perspective explains why.

uxmag.com

This episode of Design of AI with Dr. Maya Ackerman is wonderful. She echoed a lot of what I’ve been thinking about recently—how AI can augment what we as designers and creatives can do. There’s a ton of content out there that hypes up AI that can replace jobs—“Type this prompt and instantly get a marketing plan!” or “Type this prompt and get an entire website!”

Ackerman, as interviewed by Arpy Dragffy-Guerrero:

I have a model I developed which is called humble creative machines, which is the idea that we are inherently much smarter than the AI. We have not reached even 10% of our capacity as creative human beings. And the role of AI in this ecosystem is not to become better than us but to help elevate us. That applies to people who design AI, of course, because a lot of the ways that AI is designed these days, you can tell you’re cut out of the loop. But on the other hand, some of the most creative people, those who are using AI in the most beneficial way, take this attitude themselves. They fight to stay in charge. They find ways to have the AI serve their purposes instead of treating it like an all-knowing oracle. So really, it’s sort of the audacity, the guts to believe that you are smarter than this so-called oracle, right? It’s this confidence to lead, to demand that things go your way when you’re using AI.

Her stance is that the people who use AI best are those who wield it and shape its output to match their sensibilities. And so, as we’ve been hearing ad nauseam, our taste and judgement as designers really matter right now.

I’ve been playing a lot with ComfyUI recently—I’m working on a personal project that I’ll share if/when I finish it. But it made me realize that prompting a visual to get it to match what I have in my mind’s eye is not easy. This recent Instagram reel from famed designer Jessica Walsh captures my thoughts well:

I would say most AI output is shitty. People just assumed, “Oh, you rendered that an AI.” “That must have been super easy.” But what they don’t realize is that it took an entire day of some of our most creative people working and pushing the different prompts and trying different tools out and experimenting and refining. And you need a good eye to understand how to curate and pick what the best outputs are. Without that right now, AI is still pretty worthless.

It takes a ton of time to get AI output to look great, beyond prompting: inpainting, control nets, and even Photoshopping. What most non-professionals do is they take the first output from an LLM or image generator and present it as great. But it’s really not.

So I like what Dr. Ackerman mentioned in her episode: we should be in control of the humble machines, not the other way around.

Headshot of a blonde woman in a patterned blazer with overlay text "Future of Human - AI Creativity" and "Design of AI".

The Future of Human-AI Creativity [Dr. Maya Ackerman]

AI is threatening creativity, but that's because we're giving too much control to the machine to think on our behalf. In this episode, Dr. Maya Ackerman…

designof.ai

I spend a lot of time neither talking about design nor hanging out with other designers. I suppose I do a lot of reading about design to write this blog, and I am talking with the designers on my team, but I see Design as the output of a lot of input that comes from the rest of life.

Hardik Pandya agrees and puts it much more elegantly:

Design is synthesizing the world of your users into your solutions. Solutions need to work within the user’s context. But most designers rarely take time to expose themselves to the realities of that context.

You are creative when you see things others don’t. Not necessarily new visuals, but new correlations. Connections between concepts. Problems that aren’t obvious until someone points them out. And you can’t see what you’re not exposed to.

Improving as a designer is really about increasing your exposure. Getting different experiences and widening your input of information from different sources. That exposure can take many forms. Conversations with fellow builders like PMs, engineers, customer support, sales. Or doing your own digging through research reports, industry blogs, GPTs, checking out other products, YouTube.

Male avatar and text "EXPOSURE AS A DESIGNER" with hvpandya.com/notes on left; stippled doorway and rock illustration on right.

Exposure

For equal amount of design skills, your exposure to the world determines how effective of a designer you can be.

hvpandya.com

Scott Berkun enumerates five habits of the worst designers in a Substack post. The most obvious is “pretentious attitude.” It’s the stereotype, right? But in my opinion, the most damaging and potentially fatal habit is a designer’s “lack of curiosity.” Berkun explains:

Design dogma is dangerous and if the only books and resources you read are made by and for designers, you will tend to repeat the same career mistakes past designers have made. We are a historically frustrated bunch of people but have largely blamed everyone else for this for decades. The worst designers are ignorant, and refuse to ask new questions about their profession. They repeat the same flawed complaints and excuses, fueling their own burnout and depression. They resist admitting to their own blindspots and refuse to change and grow.

I’ve worked with designers who have exhibited one or more of these habits at one time or another. Heck, I probably have as well.

Good reminders all around.

Bold, rough brush-lettered text "WHY DESIGN IS HARD" surrounded by red handwritten arrows, circles, Xs and critique notes.

The 5 habits of the worst designers

Avoid these mistakes and your career will improve

whydesignishard.substack.com

David Kelley is an icon in design. A restless tinkerer turned educator, he co-founded the renowned industrial design firm IDEO, helped shape human-centered design at Stanford’s d.school, and collaborated with Apple on seminal projects like the early mouse.

Here’s his take on creativity in a brief segment for PBS News Hour:

And as I started teaching, I realized that my purpose in life was figuring out how to help people gain confidence in their creative ability. Many people assume they’re not creative. Time and time again, they say, a teacher told me I wasn’t creative or that’s not a very good drawing of a horse or whatever it is. We don’t have to teach creativity. Once we remove the blocks, they can then feel themselves as being a creative person. Witnessing somebody realizing they’re creative for the first time is just a complete joy. You can just see them come out of the shop and beaming that I can weld. Like, what’s next?

Older man with glasses and a mustache seated at a workshop workbench, shelves of blue parts bins and tools behind him.

David Kelley’s Brief But Spectacular take on creativity and design

For decades, David Kelley has helped people unlock their creativity. A pioneer of design, he founded the Stanford d.school as a place for creative, cross-disciplinary problem solving. He reflects on the journey that shaped his belief that everyone has the capacity to be creative and his Brief But Spectacular take on creativity and design.

pbs.org