82 posts tagged with “process”

When I managed over 40 creatives at a digital agency, the hardest part wasn’t the work itself—it was resource allocation. Who’s got bandwidth? Who’s blocked waiting on feedback? Who’s deep in something and shouldn’t be interrupted? You learn to think of your team not as individuals you assign tasks to, but as capacity you orchestrate.

I was reminded of that when I read about Boris Cherny’s approach to Claude Code. Cherny is a Staff Engineer at Anthropic who helped build Claude Code. Karo Zieminski, writing in her Product with Attitude Substack, breaks down how Cherny actually uses his own tool:

He keeps ~10–15 concurrent Claude Code sessions alive: 5 in terminal (tabbed, numbered, with OS notifications). 5–10 in the browser. Plus mobile sessions he starts in the morning and checks in on later. He hands off sessions between environments and sometimes teleports them back and forth.

Zieminski’s analysis is sharp:

Boris doesn’t see AI as a tool you use, but as a capacity you schedule. He’s distributing cognition like compute: allocate it, queue it, keep it hot, switch contexts only when value is ready. The bottleneck isn’t generation; it’s attention allocation.

Most people treat AI assistants like a single very smart coworker. You give it a task, wait for the answer, evaluate, iterate. Cherny treats Claude like a team—multiple parallel workers, each holding different context, each making progress while he’s focused elsewhere.

Zieminski again:

Each session is a separate worker with its own context, not a single assistant that must hold everything. The “fleet” approach is basically: don’t make one brain do all jobs; run many partial brains.

I’ve been using Claude Code for months, but mostly one session at a time. Reading this, I realize I’ve been thinking too small. The parallel session model is about working efficiently. Start a research task in one session, let it run while you code in another, check back when it’s ready.

Looks like the new skill on the block is orchestration.

Cartoon avatar in an orange cap beside text "I'm Boris and I created Claude Code." with "6.4M Views" in a sketched box.

How Boris Cherny Uses Claude Code

An in-depth analysis of how Boris Cherny, creator of Claude Code, uses it — and what it reveals about AI agents, responsibility, and product thinking.

open.substack.com

If design’s value isn’t execution—and AI is making that argument harder to resist—then what is it? Dan Ramsden offers a framework I find useful.

He breaks thinking into three types: deduction (drawing conclusions from data), induction (building predictions from patterns), and abduction—generating something new. Design’s unique contribution is abductive thinking:

When we use deduction, we discover users dropping off during a registration flow. Induction might tell us why. Abduction would help us imagine new flows to fix it.

Product managers excel at sense-making (aka “Why?”). Engineers build the thing. Design makes the difference—moving from “what is” to “what could be.”

On AI and the temptation to retreat to “creativity” or “taste” as design’s moat, Ramsden is skeptical:

Some might argue that it comes down to “taste”. I don’t think that’s quite right — taste without a rationale is just an opinion. I think designers are describers.

I appreciate that distinction. Taste without rationale is just preference. Design’s value is translating ideas through increasing levels of fidelity—from sketch to prototype to tested solution—validating along the way.

His definition of design in a product context:

Design is a set of structured processes to translate intent into experiments.

That’s a working definition I can use. It positions design not as the source of ideas (those can come from anywhere, including AI), but as the discipline that manages ideas through validation. The value isn’t in generating the concept—it’s in making it real while managing risk.

Two overlapping blue circles: left text "Making sense to generate a problem"; right text "Making a difference to generate value".

The value of Design in a product organisation

Clickbait opening: There’s no such thing as Product Design

medium.com

I’ve spent a lot of my product design career pushing for metrics—proving ROI, showing impact, making the case for design in business terms. But I’ve also seen how metrics become the goal rather than a signal pointing toward the goal. When the number goes up, we celebrate. When it doesn’t, we tweak the collection process. Meanwhile, the user becomes secondary. Last week’s big idea was around metrics; this piece piles on.

Pavel Samsonov calls this out:

Managers can only justify their place in value chains by inventing metrics for those they manage to make it look like they are managing.

I’ve sat in meetings where we debated which numbers to report to leadership—not which work to prioritize for users. The metrics become theater. So-called “vanity metrics” that always go up and to the right.

But here’s where Pavel goes somewhere unexpected. He doesn’t let designers off the hook either:

Defining success by a metric of beauty offers a useful kind of vagueness, one that NDS seems to hide behind despite the slow loading times or unnavigability that seem to define their output; you can argue with slow loading times or difficulty finding a form, but you cannot meaningfully argue with “beautiful.”

“Taste” and “beauty” are just another avoidance strategy. That’s a direct challenge to the design discourse that’s been dominant lately—the return to craft, the elevation of aesthetic judgment. Pavel’s saying it’s the same disease, different symptom. Both metrics obsession and taste obsession are ways to avoid the ambiguity of actually defining user success.

So what’s the alternative? Pavel again:

Fundamentally, the work of design is intentionally improving conditions under uncertainty. The process necessarily involves a lot of arguments over the definition and parameters of “improvement”, but the primary barrier to better is definitely not how long it takes to make artifacts.

The work is the argument. The work is facing the ambiguity rather than hiding behind numbers or aesthetics. Neither Figma velocity nor visual polish is a substitute for the uncomfortable conversation about what “better” actually means for the people using your product.

Bold "Product Picnic" text over a black-and-white rolling hill and cloudy sky, with a large outlined "50" on the right.

Your metrics are an avoidance strategy

Being able to quantify outcomes doesn’t make them meaningful. Moving past artificial metrics requires building shared intention with colleagues.

productpicnic.beehiiv.com

It’s January and by now millions of us have made resolutions and probably broken them already. The second Friday of January is known as “Quitter’s Day.”

OKRs—objectives and key results—are a method for businesses to set and align company goals. The objective is your goal, and the key results are how you measure progress toward it. Venture capitalist John Doerr learned about OKRs while working at Intel, brought them to Google, and later became the framework’s leading evangelist.

Christina Wodtke talks about how to use OKRs for your personal life, and maybe as a way to come up with better New Year’s resolutions. She looked at her past three years of personal OKRs:

Looking at the pattern laid out in front of me, I finally saw what I’d been missing. My problem wasn’t work-life balance. My problem was that I didn’t like the kind of work I was doing.

The key results kept failing because the objective was wrong. It wasn’t about balance. It was about joy.

This is the second thing key results do for you: when they consistently fail, they’re telling you something. Not that you lack discipline—that you might be chasing the wrong goal entirely.

And I love Wodtke’s line here: “New Year’s resolutions fail because they’re wishes, not plans.” She continues:

They fail because “eat better” and “be healthier” and “find balance” are too vague to act on and too fuzzy to measure.

Key results fix this. Not because measurement is magic, but because the act of measuring forces clarity. It makes you confront what you actually want. And sometimes, when the data piles up, it reveals that what you wanted wasn’t the thing you needed at all.

Your Resolution Isn’t the Problem. Your Measurement Is.

It’s January, and millions of people have made the same resolution: “Eat better.” By February, most will have abandoned it. Not because they lack willpower or discipline. Because …

eleganthack.com

Building on our earlier link about measuring the impact of features, how can we keep track of the overall health of the product? That’s where a North Star Metric comes in.

Julia Sholtz writes an introduction to North Star Metrics on analytics provider Amplitude’s blog:

Your North Star Metric should be the key measure of success for your company’s product team. It defines the relationship between the customer problems your product team is trying to solve and the revenue you aim to generate by doing so.

How is it done? The first step is to figure out the “game” your business is playing, that is, how it engages with customers:

  1. The Attention Game: How much time are your customers willing to spend in your product?
  2. The Transaction Game: How many transactions does your user make on your platform?
  3. The Productivity Game: How efficiently and effectively can someone get their work done in your product?

They have a whole resource section on this topic that’s worth exploring.

Every Product Needs a North Star Metric: Here’s How to Find Yours

Get an introduction to product strategy with examples of North Star Metrics across industries.

amplitude.com

How do we know what we designed is working as intended? We measure. Vitaly Friedman shares something called the TARS framework to measure the impact of features.

We need UX metrics to understand and improve user experience. What I love most about TARS is that it’s a neat way to connect customers’ usage and customers’ experience with relevant product metrics.

Here’s TARS in a nutshell:

  • Target Audience (%): Measures the percentage of all product users who have the specific problem that a feature aims to solve.
  • Adoption (%): Tracks the percentage of the target audience that successfully and meaningfully engages with the feature.
  • Retention (%): Assesses how many users who adopted the feature continue to use it repeatedly over time.
  • Satisfaction Score (CES): Gauges the level of satisfaction, specifically how easy it was for retained users to solve their problem after using the feature.

Friedman has more details in the article, including how to use TARS to measure how well a feature is performing for your intended target audience.
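
To make the four measures concrete, here is a minimal sketch in Python of how TARS percentages could be computed from per-user product data. The record fields (has_problem, adopted_feature, retained, ces_score) are illustrative assumptions on my part, not definitions from Friedman’s article; in practice you would derive them from analytics events and survey responses.

```python
# Hypothetical sketch: computing TARS from per-user flags.
# Field names are assumptions for illustration, not from Friedman's article.
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserRecord:
    user_id: str
    has_problem: bool                   # belongs to the feature's target audience
    adopted_feature: bool = False       # meaningfully engaged with the feature
    retained: bool = False              # still using it after some period
    ces_score: Optional[float] = None   # Customer Effort Score for retained users

def tars(users: list[UserRecord]) -> dict[str, float]:
    target = [u for u in users if u.has_problem]
    adopters = [u for u in target if u.adopted_feature]
    retained = [u for u in adopters if u.retained]
    scores = [u.ces_score for u in retained if u.ces_score is not None]

    def pct(part: int, whole: int) -> float:
        return 100.0 * part / whole if whole else 0.0

    return {
        "target_audience_pct": pct(len(target), len(users)),
        "adoption_pct": pct(len(adopters), len(target)),
        "retention_pct": pct(len(retained), len(adopters)),
        "satisfaction_ces": sum(scores) / len(scores) if scores else 0.0,
    }
```

What the sketch makes obvious is that each TARS percentage is a ratio over the previous stage’s population: adoption is measured against the target audience, and retention against adopters, not against all users.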

How To Measure The Impact Of Features

Meet TARS — a simple, repeatable, and meaningful UX metric designed specifically to track the performance of product features. Upcoming part of the Measure UX & Design Impact (use the code 🎟 IMPACT to save 20% off today).

smashingmagazine.com

Here is a good reminder from B. Prendergast to “stop asking users what they want—and start watching what they do.”

Asking people what they want is one of the most natural instincts in product work. Surveys, interviews, and feature wish lists feel accessible, social, and collaborative. They open channels to understand and empathise with the user base. They help teams feel closer to the people they serve. For teams under pressure, a stack of opinions can feel like solid data.

But this breaks when we compare what users say to what they actually do (say-do gap).

We all want to present ourselves a certain way. We want to seem more competent than confused (social desirability bias). Our memories can be fuzzy, especially about routine tasks (recall bias). Standards for what feels “easy” or “intuitive” can vary wildly between people (reference bias).

And of course, as soon as we start to ask users to imagine what they’d want, they’ll solve based on their personal experiences—which might be the right solution for them, but might not be for other users in the same situation.

Prendergast goes on to suggest “watch what people do, measure what matters, and use what they say to add context.” This approach involves watching user interactions, analyzing real behaviors through analytics, and treating feature requests as signals of underlying problems to uncover genuine needs. Prioritizing decisions based on observed patterns and desired outcomes leads to more effective solutions than relying on user opinions alone.

Stop asking users what they want — and start watching what they do.

People’s opinions about themselves and the things they use rarely match real behaviour.

renderghost.leaflet.pub

Echoing my series on the design talent crisis and other articles warning against the practice of cutting back on junior talent, Vaughn Tan offers yet another dimension: subjective decision-making skills are only honed through practice. But the opportunities given to junior staff for this type of decision-making are slim.

But to back up, here’s Tan explaining what subjective decision-making is:

These are decisions where there’s no one “correct” answer and the answers that work can’t be known in advance. Subjective decisionmaking requires critical thinking skills to make strongly reasoned arguments, identify appropriate evidence, understand the tradeoffs of different arguments, make decisions that may (or may not) be correct, and develop compelling narratives for those decisions.

While his article isn’t about AI, nor is it about companies not hiring juniors, it is about companies failing to develop juniors and to give them chances to practice this type of decision-making in low-stakes situations.

Critical thinking and judgment require practice. Practice needs to be frequent, and needs to begin at a low level with very few consequences that are important. This small-scale training in subjective decisionmaking and critical thinking is the best way to learn how to do it properly in more consequential situations.

If you wait until someone is senior to teach judgment, their first practice attempts have serious consequences. High-stakes decisionmaking pressure cannot be simulated realistically; learning how to deal with it requires actual practice with real consequences that progressively increases in scope and consequentiality.

And why is this all important? Not developing junior staff means there will be a bottleneck issue—only seniors can make these judgement calls—and one day, there will be a succession problem, i.e., who takes over when the seniors leave or retire.

Judgment from the ground up

tl;dr: Critical thinking is foundational for making decisions that require subjective judgment. People learn how to do subjective decisionmaking

vaughntan.org

This short article could easily fall under the motivational category, but I couldn’t help but draw parallels to what we do as designers when working as part of a product team.

Hazel Weakly says that people who see systems also tend to end up in charge of them, sooner or later. And to be a leader is to “understand that you’ll find yourself stranded in the middle of the ocean one day.”

Not just you, but everyone you lead. And you’ll need to chart a course. In the ever-changing winds, the ever-shifting tides, the unknown weather, and with an inability to see up or down or basically anywhere except a few minutes away. You won’t have the time to find your bearings even if you could. Yet, somehow, in this sea of swirling and infinite complexities and probabilities, in the midst of incalculable odds, you will find yourself needing to have simultaneously several different things…

The first thing is that you will fail. I equate this to knowing that design is about trial and error, testing and measuring, and then adjusting.

The second thing is the unshakable conviction that “you will succeed.” I see success as solving the problem, coming up with a solution that helps users do what they need. And you know what? Designers will succeed when they follow the design process.

Finally, the third thing is to “prepare and make ready everyone around you.” Which means convincing your product and engineering counterparts and other stakeholders that the solution you’re advocating for is the right one.

To Be a Leader of Systems

Picture with me, if you will, the absurdity of finding yourself swimming in the middle of the ocean. First think about the ocean and how deep and infinitely…

hazelweakly.me
Storyboard grid showing a young man and family: kitchen, driving, airplane, supermarket, night house, grill, dinner.

Directing AI: How I Made an Animated Holiday Short

My first taste of generating art with AI was back in 2021 with Wombo Dream. I even used it to create very trippy illustrations for a series I wrote on getting a job as a product designer. To be sure, the generations were weird, if not outright ugly. But it was my first test of getting an image by typing in some words. Both Stable Diffusion and Midjourney gained traction the following year, and I tried them as well. The results were never great or satisfactory. Years upon years of being an art director had made me very, very picky—or put another way, I had developed taste.

I didn’t touch generative AI art again until I saw a series of images Lars Bastholm had made while playing with Midjourney.

Child in yellow jacket smiling while holding a leash to a horned dragon by a park pond in autumn.

Lars Bastholm created this in Midjourney, prompting “What if, in the 1970s, they had a ‘Bring Your Monster’ festival in Central Park?”

That’s when I went back to Midjourney and started to illustrate my original essays with images generated by it, but usually augmented by me in Photoshop.

In the intervening years, generative AI art tools had developed a common set of functionality that was all very new to me: inpainting, style, chaos, seed, and more. Beyond closed systems like Midjourney and OpenAI’s DALL-E, open-source models like Stable Diffusion and Flux, and now a plethora of Chinese models, offer even better prompt adherence and controllability via even more opaque-sounding functionality like ControlNets, LoRAs, CFG, and other parameters. It’s funny to me that for such an artistic field, the products that enable these creations are so technical.

My Site Stats for 2025

In 2025, I published 328 posts with a total of 118,445 words on this blog. Of course, in most of the posts, I’m quoting others, so excluding block quotes—those quoted passages greater than a sentence—I’m down to 76,226 words. Still pretty impressive, I’d say.

Post analysis 2025 - 328 posts. Top months: Oct 45, Jul 42, Mar 4. Link posts 283 (86%). Total words 118,445, avg 361.

I used Claude Code to write a little script that analyzed my posts from last year.
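
If you’re curious, here’s a rough sketch of the kind of script I mean, assuming the year’s posts live as Markdown files in a single folder and that block quotes are the lines starting with “>”. The path and layout below are placeholders, not my actual setup.

```python
# Rough sketch: count posts and words for the year, excluding block quotes.
# The directory path and Markdown layout are assumptions for illustration.
from pathlib import Path

POSTS_DIR = Path("content/posts/2025")  # placeholder location

def analyze(posts_dir: Path) -> tuple[int, int, int]:
    """Return (post count, total words, words excluding block quotes)."""
    posts = sorted(posts_dir.glob("*.md"))
    total_words = 0
    words_excluding_quotes = 0
    for post in posts:
        for line in post.read_text(encoding="utf-8").splitlines():
            n = len(line.split())
            total_words += n
            # Markdown block quotes start with ">"; skip them for the second count
            if not line.lstrip().startswith(">"):
                words_excluding_quotes += n
    return len(posts), total_words, words_excluding_quotes

if __name__ == "__main__":
    count, total, excluding = analyze(POSTS_DIR)
    print(f"{count} posts, {total:,} words total, {excluding:,} excluding block quotes")
```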

Reviewing data from my analytics package, Umami, it’s also interesting to see which posts received the most views. By far the most viewed was “Beyond the Prompt,” my AI prompt-to-code shootout article. The others in the top five were:

That last one has always surprised me. I must’ve hit the Google lottery on it for some reason.

Speaking of links, since April (I have no data from before then), visitors clicked on links mentioned on this blog 2,949 times. I also wanted to see which linked items were most popular by outbound clicks:

  1. AI 2027, naturally
  2. Smith & Diction’s catalog of “Usable Google Fonts”
  3. Matt Webb’s post on Do What I Mean
  4. A visualization called “The Authoritarian Stack” that shows how power, money, and companies connect
  5. The New York Times list of the “25 Most Influential Magazine Covers of All Time” (sadly, the gift link has since expired)

And finally, the year’s totals: 58,187 views from 42,075 visitors. That works out to an average of about 3,500 visitors per month. Tiny compared with other blogs out there. But my readers mean the world to me.

Anyway, some interesting stats, at least to me. Here’s to more in 2026.

I spend a lot of time not talking about design or hanging out with other designers. I suppose I do a lot of reading about design to write this blog, and I talk with the designers on my team, but I see Design as the output of a lot of input that comes from the rest of life.

Hardik Pandya agrees and puts it much more elegantly:

Design is synthesizing the world of your users into your solutions. Solutions need to work within the user’s context. But most designers rarely take time to expose themselves to the realities of that context.

You are creative when you see things others don’t. Not necessarily new visuals, but new correlations. Connections between concepts. Problems that aren’t obvious until someone points them out. And you can’t see what you’re not exposed to.

Improving as a designer is really about increasing your exposure. Getting different experiences and widening your input of information from different sources. That exposure can take many forms. Conversations with fellow builders like PMs, engineers, customer support, sales. Or doing your own digging through research reports, industry blogs, GPTs, checking out other products, YouTube.

Male avatar and text "EXPOSURE AS A DESIGNER" with hvpandya.com/notes on left; stippled doorway and rock illustration on right.

Exposure

For equal amount of design skills, your exposure to the world determines how effective of a designer you can be.

hvpandya.com

Scott Berkun enumerates five habits of the worst designers in a Substack post. The most obvious is “pretentious attitude.” It’s the stereotype, right? But in my opinion, the most damaging and potentially fatal habit is a designer’s “lack of curiosity.” Berkun explains:

Design dogma is dangerous and if the only books and resources you read are made by and for designers, you will tend to repeat the same career mistakes past designers have made. We are a historically frustrated bunch of people but have largely blamed everyone else for this for decades. The worst designers are ignorant, and refuse to ask new questions about their profession. They repeat the same flawed complaints and excuses, fueling their own burnout and depression. They resist admitting to their own blindspots and refuse to change and grow.

I’ve worked with designers who have exhibited one or more of these habits at one time or another. Heck, I probably have as well.

Good reminders all around.

Bold, rough brush-lettered text "WHY DESIGN IS HARD" surrounded by red handwritten arrows, circles, Xs and critique notes.

The 5 habits of the worst designers

Avoid these mistakes and your career will improve

whydesignishard.substack.com

Critiques are the lifeblood of design. Anyone who went to design school has participated in and been the focus of a crit. It’s “the intentional application of adversarial thought to something that isn’t finished yet,” as Fabricio Teixeira and Caio Braga, the editors of DOC, put it.

A lot of solo designers—whether they’re a design team of one or a freelancer—don’t have the luxury of critiques. In my view, they’re handicapped. There are workarounds, of course, such as critiques with cross-functional peers, but it’s not the same. I had one designer on my team—who used to be a design team of one at her previous company—come up to me and say she’d learned more in a month than in a year at her former job.

Further down, Teixeira and Braga say:

In the age of AI, the human critique session becomes even more important. LLMs can generate ideas in 5 seconds, but stress-testing them with contextual knowledge, taste, and vision, is something that you should be better at. As AI accelerates the production of “technically correct” and “aesthetically optimized” work, relying on just AI creates the risks of mediocrity. AI is trained to be predictable; crits are all about friction: political, organizational, or strategic.

Critique

On elevating craft through critical thinking.

doc.cc

He told me his CEO - who’s never written a line of code - was running their company from an AI code editor.

I almost fell out of my chair.

OF COURSE. WHY HAD I NOT THOUGHT OF THAT.

I’ve since gotten rid of almost all of my productivity tools.

ChatGPT, Notion, Todoist, Airtable, Google Keep, Perplexity, my CRM. All gone.

That’s the lede for a piece by Derek Larson on running everything from Claude Code. I’ve covered how Claude Code is pretty brilliant and how there are dozens of use cases beyond coding.

But getting rid of everything and using just text files and the terminal window? Seems extreme.

Larson uses a skill in Claude Code called “/weekly” to do a weekly review.

  1. Claude looks at every file change since last week
  2. Claude evaluates the state of projects, tasks, and the roadmap
  3. We have a conversation to dig deeper, and make decisions
  4. Claude generates a document summarizing the week and plan we agreed on

Then Claude finds items he’s missed or has been procrastinating on, and “creates a space to dump everything” on his mind.

Blue furry Cookie Monster holding two baking sheets filled with chocolate chip cookies.

Feed the Beast

AI Eats Software

dtlarson.com

Design Thinking has gotten a bad rap in recent years. It was supposed to change everything in the corporate world but ended up changing very little. While Design Thinking may not be the darling anymore, designers still need time to think, which is, for the sake of argument, time away from Figma and pushing pixels.

Chris Becker argues in UX Collective:

However, the canary in the coalmine is that Designers are not being used for their “thinking” but rather their “repetition”. Much of the consternation we feel in the UX industry is catapulted on us from this point of friction.

He says that agile software development and time for designers to think aren’t incompatible:

But allowing Designers to implement their thinking into the process is about trust. When good software teams collaborate effectively, there are high levels of trust and autonomy (a key requirement of agile teams). Designers must earn that trust, of course, and when we demonstrate that we have “done the thinking,” it builds confidence and garners more thinking time. Thinking begets thinking. So, Designers, let’s continue to work to maximise our “thinking” faculties.

Hand-drawn diagram titled THINKING: sensory icons and eyeballs feed a brain, plus a phone labeled "Illusory Truth Effect," leading to outputs labeled "Habits."

Let designers think

How “Thinking” + “Designing” need to be practiced outside AI.

uxdesign.cc

Game design is fascinating to me. For designers, “gamification” was all the rage a few years back, inspired by apps like Duolingo that made it fun to progress in a product. Raph Koster outlines a twelve-step, systems-first framework for game design, complete with illustrations. Notice how he uses UX terms like “affordance” because ultimately, game design is UX.

In step five, “Feedback,” Koster provides an example:

[The player] can’t learn and get better unless [they] get a whole host of information.

  • You need to know what actions – we usually call them verbs — are even available to you. There’s a gas pedal.
  • You need to be able to tell you used a verb. You hear the engine growl as you press the pedal.
  • You need to see that the use of the verb affected the state of the problem, and how it changed. The speedometer moved!
  • You need to be told if the state of the problem is better for your goal, or worse. Did you mean to go this fast?

Sound familiar? It’s Jakob Nielsen’s “Visibility of System Status.”

White-bordered hex grid with red, blue, yellow and black hex tiles marked by dot patterns, clustered on a dark tabletop

Game design is simple, actually

So, let’s just walk through the whole thing, end to end. Here’s a twelve-step program for understanding game design. One: Fun There are a lot of things people call “fun.” But most of them are not u…

raphkoster.com

I think the headline is a hard stance, but I appreciate the sentiment. All the best designers and creatives—including developers—I’ve ever worked with do things on the side. Or in Rohit Prakash’s words, they tinker. They’re always making something, learning along the way.

Prakash, writing in his blog:

Acquiring good taste comes through using various things, discarding the ones you don’t like and keeping the ones you do. if you never try various things, you will not acquire good taste.

It’s important for designers to see other designs and use other products—if you’re a software designer. It’s equally important to look up from Dribbble, Behance, Instagram, and even this blog and go experience something unrelated to design. Art, concerts, cooking. All of it gets synthesized through your POV and becomes your taste.

Large white text "@seatedro on x dot com" centered on a black background.

If you don’t tinker, you don’t have taste

programmer by day, programmer by night.

seated.ro

Ethan Mollick, a professor of entrepreneurship at the Wharton School, says that AI has gotten so good that our relationship with it is changing. “We’re moving from partners to audience, from collaboration to conjuring,” he says.

He fed NotebookLM his book and 140 Substack posts and asked for a video overview. AI famously hallucinates. But Mollick found no factual errors in the six-minute video.

We’re shifting from being collaborators who shape the process to being supplicants who receive the output. It is a transition from working with a co-intelligence to working with a wizard. Magic gets done, but we don’t always know what to do with the results. This pattern — impressive output, opaque process — becomes even more pronounced with research tasks.

Mollick believes that the most wizard-like model today is GPT-5 Pro. He uploaded an academic paper that took him a year to write, which was peer-reviewed, and was then published in a major journal…

Nine minutes and forty seconds later, I had a very detailed critique. This wasn’t just editorial criticism, GPT-5 Pro apparently ran its own experiments using code to verify my results, including doing Monte Carlo analysis and re-interpreting the fixed effects in my statistical models. It had many suggestions as a result (though it fortunately concluded that “the headline claim [of my paper] survives scrutiny”), but one stood out. It found a small error, previously unnoticed. The error involved two different sets of numbers in two tables that were linked in ways I did not explicitly spell out in my paper. The AI found the minor error, no one ever had before.

Later in his post, Mollick says that there’s a problem with this wizardry—it’s too opaque. So what can we do?

First, learn when to summon the wizard versus when to work with AI as a co-intelligence or to not use AI at all. AI is far from perfect, and in areas where it still falls short, humans often succeed. But for the increasing number of tasks where AI is useful, co-intelligence, and the back-and-forth it requires, is often superior to a machine alone. Yet, there are, increasingly, times when summoning a wizard is best, and just trusting what it conjures.

Second, we need to become connoisseurs of output rather than process. We need to curate and select among the outputs the AI provides, but more than that, we need to work with AI enough to develop instincts for when it succeeds and when it fails.

And lastly, trust it. Trust the technology, he suggests. “The question isn’t ‘Is this completely correct?’ but ‘Is this useful enough for this purpose?’”

I think we’re in that transition period. AI is indeed dastardly great at some things and constantly getting better at the tasks it’s not. But we all know where this is headed.

Witch hat hovering over a desktop monitor with circuit-like lines flowing into the screen, small coffee mug on the desk.

On Working with Wizards

Verifying magic on the jagged frontier

oneusefulthing.org

Remote work really exploded when the Covid-19 pandemic hit. Everyone had to adjust to working from home, relying on Zoom and Slack and other collaborative tools much more. But beyond tooling, there’s also process. Matt Mullenweg, CEO of Automattic, has famously been a proponent of distributed work for a while.

Paolo Belcastro peels back the curtain to share how the 1,500 or so global employees of Automattic stay connected via two core principles:

There are two ideas that define our communication culture:

  • Radical Transparency: we default to openness, with every conversation accessible to everyone in the company.
  • Asynchronous by Design: we don’t expect everyone to be “on” at the same time.

Everything is written down:

Our internal platform, P2, started life as a WordPress theme (it was called Prologue, later updated to version 2 and eventually shortened to P2) that lets people post directly on the front end of a site—fast, simple, and visible to everyone. Over time it evolved into a network of thousands of P2s for teams, projects, and watercooler chats (couch surfing, classified ads, house renovations, babies, pets, music, or games, we kind of have it all).

Every post, every comment, every decision ever made in the history of Automattic is preserved there.

As you can imagine, it soon becomes a volume problem. There’s too much stuff.

No one can read everything.

That’s why onboarding is designed to help people adapt:

  • Each newcomer is paired with a mentor from a different team, to give them a cross-company perspective.
  • They receive a curated list of “milestone posts” that map the history of Automattic, along with role-specific threads relevant to their work.
  • The Field Guide offers principles, templates, and advice about how to handle communication.

Somehow, they make it work.

Using chaos to communicate order

How we communicate at Automattic

ttl.blog

Building on Matthew Ström-Awn’s argument that true quality emerges from decentralized, ground-level ownership, Sean Goedecke writes an essay exploring how software companies navigate the tension between formalized control and the informal, often invisible work that actually drives product excellence.

But first, what does legibility even mean?

What does legibility mean to a tech company, in practice? It means:

  • The head of a department knows, to the engineer, all the projects the department is currently working on
  • That head also knows (or can request) a comprehensive list of all the projects the department has shipped in the last quarter
  • That head has the ability to plan work at least one quarter ahead (ideally longer)
  • That head can, in an emergency, direct the entire resources of the department at immediate work

Note that “shipping high quality software” or “making customers happy” or even “making money” is not on this list. Those are all things tech companies want to do, but they’re not legibility.

Goedecke argues that while leaders prize formal processes and legibility to facilitate predictability and coordination, these systems often overlook the messier, less measurable activities that drive true product quality and user satisfaction.

All organizations - tech companies, social clubs, governments - have both a legible and an illegible side. The legible side is important, past a certain size. It lets the organization do things that would otherwise be impossible: long-term planning, coordination with other very large organizations, and so on. But the illegible side is just as important. It allows for high-efficiency work, offers a release valve for processes that don’t fit the current circumstances, and fills the natural human desire for gossip and soft consensus.

Seeing like a software company

The big idea of James C. Scott’s Seeing Like A State can be expressed in three points: Modern organizations exert control by maximising “legibility”: by…

seangoedecke.com

Matt Ström-Awn makes the argument that companies can achieve sustainable excellence by empowering everyone at each level to take ownership of quality, rather than relying solely on top-down mandates or standardized procedures.

But more and more I’ve come to believe that quality isn’t a slogan, a program, or a scorecard. It’s a promise kept at the edge by the people doing the work. And, ideally, quality is fundamental to the product itself, where users can judge it without our permission. That’s the shift we need: away from heroics at the center, toward systems that make quality inevitable.

The stakes are high. Centralized quality — slogans, KPIs, executive decrees — can produce positive results, but it’s brittle. Decentralized quality — continuous feedback, distributed ownership, emergent standards — builds resilience. In this essay, I’d like to make the case that the future belongs to those who can decentralize their mindset and approach to quality.

Ström-Awn offers multiple case studies, contrasting centralized systems with decentralized ones, using Ford, Amazon, Apple, Toyota, Netflix, 3M, Morning Star, W.L. Gore, Valve, Barnes & Noble, and Microsoft under Satya Nadella as examples.

These stories share a common thread: organizations that trusted their frontline workers to identify and solve quality problems. But decentralized quality has its own vulnerabilities. Valve’s radical structure has been criticized for creating informal power hierarchies and making it difficult to coordinate large projects. Some ex-employees describe a “high school clique” atmosphere where popular workers accumulate influence while others struggle. Without traditional management oversight, initiatives can moulder, or veer in directions that don’t serve broader company goals.

Still, these examples show a different path for achieving quality, where excellence is defined in the course of building a product. Unlike centralized approaches relying on visionary (but fallible) leaders, decentralized systems are resilient to individual failures, adaptable to change, and empowering to builders. The andon cord, the rolling desk, and the local bookstore manager each represent a small bet on human judgment over institutional control. Those bets look like they’re paying off.

Decentralizing quality

Why moving judgment to the edges wins in the long run

matthewstrom.com

As UX designers, we try to anticipate the edge cases—what might a user do and how can we ensure they don’t hit any blockers. But beyond the confines of the products we build, we must also remember to anticipate the unintended consequences. How might this product or feature affect the user emotionally? Are we creating bad habits? Are we fomenting rage in pursuit of engagement?

Martin Tomitsch and Steve Baty write in DOC, suggesting some frameworks to anticipate the unpredictable:

Chaos theory describes the observation that even tiny perturbations like the flutter of a butterfly can lead to dramatic, non-linear effects elsewhere over time. Seemingly small changes or decisions that we make as designers can have significant and often unforeseen consequences.

As designers, we can’t directly control the chain of reactions that will follow an action. Reactions are difficult to predict, as they occur depending on factors beyond our direct control.

But by using tools like systems maps, the impact ripple canvas, and iceberg visuals, we can take potential reactions out of the unpredictable pile and shift them into the foreseeable pile.

The UX butterfly effect

Understanding unintended consequences in design and how to plan for them.

doc.cc

Sticking with the theme of workslop and outsourcing our core work to AI, Douglas Rushkoff writes in Fast Company:

By using the AI to do the big stuff—by outsourcing our primary competencies to the machines instead of giving them the boring busywork—we deskill ourselves and deprive everyone of the opportunity for AI-enhanced outputs. Too many of us are using AI as the primary architect for a project, rather than the general contractor who supports the architect’s human vision.

People forget that it’s the process of doing something that helps us synthesize and form the connections necessary for innovation.

As the researcher behind MIT’s study “This is Your Brain on ChatGPT” explained at a recent ANDUS event, when people turn to an AI for a solution before working on a problem themselves, the number of connections formed in their brains decreases. But when they turn to the AI after working on the problem for a while, they end up with more neural connections than if they worked entirely alone.

That’s because the value of the AI is not its ability to create product for us, but to engage with us in our process. Working and iterating with an AI—doing what we could call generative thinking—is actually a break from Industrial Age thinking. We focus less on outputs than on cycles. Less on the volume of short-term results (assembly line), and more on the quality and complexity of thought and innovation.

The value of the AI is not its ability to create product for us, but to engage with us in our process

AI doesn’t have to replace our competencies or even our employees.

fastcompany.com

Speaking of workslop, here’s an article from NN/g on how to avoid over-relying on AI in our design field. They call these pitfalls the “7 Deadly AI Sins for UX Professionals.”

  1. Outsourced Thinking
  2. Wasted Time
  3. Lost Details
  4. Isolated Ideation
  5. Naïve Trust
  6. Bland Taste
  7. Defensive Outlook

As Tanner Kohler writes:

It’s not about avoiding AI. It’s about maintaining your own growth and the quality of your work as you use AI. AI will constantly be changing. Never let yourself slip into repeatedly committing the sins that weaken you and your UX skills.

7 Deadly AI Sins for UX Professionals

Succumbing to AI temptations weakens your UX skills. Strive for the AI virtues to keep yourself strong as you use AI in your work.

nngroup.com iconnngroup.com