It’s January and by now millions of us have made resolutions and probably broken them already. The second Friday of January is known as “Quitter’s Day.”

OKRs—objectives and key results—are a method for businesses to set and align company goals. The objective is your goal and the KRs are the ways to reach your goals. Venture capitalist John Doerr learned about OKRs while working at Intel, brought it to Google, and later became the framework’s leading evangelist.

Christina Wodtke talks about how to use OKRs for your personal life, and maybe as a way to come up with better New Year’s resolutions. She looked at her past three years of personal OKRs:

Looking at the pattern laid out in front of me, I finally saw what I’d been missing. My problem wasn’t work-life balance. My problem was that I didn’t like the kind of work I was doing.

The key results kept failing because the objective was wrong. It wasn’t about balance. It was about joy.

This is the second thing key results do for you: when they consistently fail, they’re telling you something. Not that you lack discipline, but that you might be chasing the wrong goal entirely.

And I love Wodtke’s line here: “New Year’s resolutions fail because they’re wishes, not plans.” She continues:

They fail because “eat better” and “be healthier” and “find balance” are too vague to act on and too fuzzy to measure.

Key results fix this. Not because measurement is magic, but because the act of measuring forces clarity. It makes you confront what you actually want. And sometimes, when the data piles up, it reveals that what you wanted wasn’t the thing you needed at all.

Your Resolution Isn’t the Problem. Your Measurement Is.

It’s January, and millions of people have made the same resolution: “Eat better.” By February, most will have abandoned it. Not because they lack willpower or discipline. Because …

eleganthack.com

Building on our earlier link about measuring the impact of features, how can we keep track of the overall health of the product? That’s where a North Star Metric comes in.

Julia Sholtz writes an introduction to North Star Metrics on analytics provider Amplitude’s blog:

Your North Star Metric should be the key measure of success for your company’s product team. It defines the relationship between the customer problems your product team is trying to solve and the revenue you aim to generate by doing so.

How is it done? The first step is to figure out the “game” your business is playing: how your business engages with customers:

  1. The Attention Game: How much time are your customers willing to spend in your product?
  2. The Transaction Game: How many transactions does your user make on your platform?
  3. The Productivity Game: How efficiently and effectively can someone get their work done in your product?

They have a whole resource section on this topic that’s worth exploring.

Every Product Needs a North Star Metric: Here’s How to Find Yours

Get an introduction to product strategy with examples of North Star Metrics across industries.

amplitude.com

How do we know what we designed is working as intended? We measure. Vitaly Friedman shares something called the TARS framework to measure the impact of features.

We need UX metrics to understand and improve user experience. What I love most about TARS is that it’s a neat way to connect customers’ usage and customers’ experience with relevant product metrics.

Here’s TARS in a nutshell:

  • Target Audience (%): Measures the percentage of all product users who have the specific problem that a feature aims to solve.
  • Adoption (%): Tracks the percentage of the target audience that successfully and meaningfully engages with the feature.
  • Retention (%): Assesses how many users who adopted the feature continue to use it repeatedly over time.
  • Satisfaction Score (CES): Gauges the level of satisfaction, specifically how easy it was for retained users to solve their problem after using the feature.
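
The four TARS numbers form a simple funnel, which makes them easy to compute from raw counts. Here’s a minimal sketch; the `tars()` helper and the example counts are my own illustration, not from Friedman’s article:

```python
# Illustrative sketch of the TARS funnel: each stage is a percentage of
# the stage before it. The numbers below are made up for the example.

def tars(all_users, target, adopted, retained, ces_scores):
    """Compute the four TARS metrics from raw user counts."""
    return {
        "target_audience_pct": 100 * target / all_users,
        "adoption_pct": 100 * adopted / target,
        "retention_pct": 100 * retained / adopted,
        "satisfaction_ces": sum(ces_scores) / len(ces_scores),
    }

metrics = tars(
    all_users=10_000,   # everyone using the product
    target=4_000,       # users who have the problem the feature solves
    adopted=1_000,      # target users who meaningfully used the feature
    retained=600,       # adopters still using it after some period
    ces_scores=[5, 6, 7, 6],  # Customer Effort Scores from retained users
)
print(metrics)
# → {'target_audience_pct': 40.0, 'adoption_pct': 25.0,
#    'retention_pct': 60.0, 'satisfaction_ces': 6.0}
```

Note that each percentage uses the previous stage as its denominator, so a feature can have high adoption among a tiny target audience, which is exactly the kind of mismatch the framework is meant to surface.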

Friedman has more details in the article, including how to use TARS to measure how well a feature is performing for your intended target audience.

How To Measure The Impact Of Features — Smashing Magazine

Meet TARS — a simple, repeatable, and meaningful UX metric designed specifically to track the performance of product features. Upcoming part of the Measure UX & Design Impact (use the code 🎟 IMPACT to save 20% off today).

smashingmagazine.com

I really appreciate the perspective of Lai-Jing Chu here as a Silicon Valley veteran. The struggle to prove the value of design is real.

I don’t know another function or role in the tech industry where it seems like we have to do our jobs at the same time as — and I will avoid saying “demonstrating value” here because it’s more than that — we carry out some sort of divine duty to make the product (let alone the world) a better place through our creativity.

Instead of more numbers like ROI calculations, Chu argues for counterintuitive approaches for advocacy, “not more left-brain exercises.”

Chu introduces us to W. Edwards Deming, an influential management consultant who wrote:

The most important figures needed for management of any organization are unknown and unknowable, but successful management must nevertheless take account of them.

One strategy she offers is to ask leadership a common-sense question: How would you grade the design?

Because when was the last time anyone did the most basic thing — to stop for a moment, hold the product in their hands, and take a good hard look at it? These questions throw the ball back in their court. It makes them wonder what they can do to help. Because chances are, most leaders want their product to have a good user or customer experience and understand that it makes a difference to their business success. You don’t just want buy-in — you want them to have true ownership.

I admire this approach, because chances are, leaders are already hearing about UX issues from customers. But putting this into practice at, let’s say, any startup post-Series A will be a challenge. There’s a lot of coordination and alignment that needs to happen because exec-level attention is much harder to come by.

What can’t be measured could break your business

Burned out from proving design’s value? Let’s change the conversation

uxdesign.cc

Here is a good reminder from B. Prendergast to “stop asking users what they want—and start watching what they do.”

Asking people what they want is one of the most natural instincts in product work. Surveys, interviews, and feature wish lists feel accessible, social, and collaborative. They open channels to understand and empathise with the user base. They help teams feel closer to the people they serve. For teams under pressure, a stack of opinions can feel like solid data.

But this breaks when we compare what users say to what they actually do (say-do gap).

We all want to present ourselves a certain way. We want to seem more competent than confused (social desirability bias). Our memories can be fuzzy, especially about routine tasks (recall bias). Standards for what feels “easy” or “intuitive” can vary wildly between people (reference bias).

And of course, as soon as we start to ask users to imagine what they’d want, they’ll solve based on their personal experiences—which might be the right solution for them, but might not be for other users in the same situation.

Prendergast goes on to suggest “watch what people do, measure what matters, and use what they say to add context.” This approach involves watching user interactions, analyzing real behaviors through analytics, and treating feature requests as signals of underlying problems to uncover genuine needs. Prioritizing decisions based on observed patterns and desired outcomes leads to more effective solutions than relying on user opinions alone.

Stop asking users what they want — and start watching what they do. - Annotated

People’s opinions about themselves and the things they use rarely match real behaviour.

renderghost.leaflet.pub

Product manager Adrian Raudaschl offered some reflections on 2025 from his point of view. It’s a mixture of life advice, product recommendations, and thoughts about the future of tech work.

The first quote I’ll pull out is this one, about creativity and AI:

Ultimately, if we fail to maintain active engagement with the creative process and merely delegate tasks to AI without reflection, there is a risk that delegation becomes abdication of responsibility and authorship.

“Active engagement” with the tasks that we delegate to AI. This reminds me of the humble machines argument by Dr. Maya Ackerman.

On vibe coding:

The most important thing, I think, that most people in knowledge work should be doing is learning to vibe code. Vibe code anything: a diary, a picture book for your mum, a fan page for your local farm. Anything. It’s not about learning to code, but rather appreciating how much more we could do with machines than before. This is what I mean about the generalist product manager: being able to prototype, test, and build without being held back by technical constraints.

I concur 100%. Even if you don’t think you’re a developer, even if you don’t quite understand code, vibe coding something will be illuminating. I think it’s different than asking ChatGPT for a bolognese sauce recipe or how to change a tire. Building something that will instantly run on your computer and seeing the adjustments made in real-time from your plain English prompts is very cool and gives you a glimpse into how LLMs problem-solve.

A product manager’s 48 reflections on 2025

and why I’ve been making Bob Dylan songs about Sonic the Hedgehog

uxdesign.cc

Yesterday, Anthropic launched Cowork, a research preview that is essentially Claude Code but for non-coders.

From the blog announcement:

How is using Cowork different from a regular conversation? In Cowork, you give Claude access to a folder of your choosing on your computer. Claude can then read, edit, or create files in that folder. It can, for example, re-organize your downloads by sorting and renaming each file, create a new spreadsheet with a list of expenses from a pile of screenshots, or produce a first draft of a report from your scattered notes.

In Cowork, Claude completes work like this with much more agency than you’d see in a regular conversation. Once you’ve set it a task, Claude will make a plan and steadily complete it, while looping you in on what it’s up to. If you’ve used Claude Code, this will feel familiar—Cowork is built on the very same foundations. This means Cowork can take on many of the same tasks that Claude Code can handle, but in a more approachable form for non-coding tasks.

Apparently, Cowork was built very quickly using—naturally—Claude Code. Michael Nuñez in VentureBeat:

…according to company insiders, the team built the entire feature in approximately a week and a half, largely using Claude Code itself.

Alas, this is only available to Claude Max subscribers ($100–200 per month). I will need to check it out when it’s more widely available.

Introducing Cowork | Claude

Claude Code’s agentic capabilities, now for everyone. Give Claude access to your files and let it organize, create, and edit documents while you focus on what matters.

claude.com

AI threatens to let product teams ship faster. Faster PRDs, faster designs, and faster code. But going too fast can often lead to incurring design and tech debt, or even worse, shipping the wrong thing.

Anton Sten sagely warns:

The biggest pattern I have seen across startups is that skipping clarity never saves time. It costs time. The fastest teams are not the ones shipping the most. They are the ones who understand why they are shipping. That is the difference between moving for the sake of movement and moving with purpose. It is the difference between speed and true velocity.

How do you avoid this? Sten:

The reset is simple and almost always effective. Before building anything, pause long enough to ask, “What problem am I solving, and for whom?” It sounds basic, but this question forces alignment. It replaces assumptions with clarity and shifts attention back to the user instead of internal preferences. When teams do this consistently, the entire atmosphere changes. Decisions become easier. Roadmaps make more sense. People contribute more of themselves. You can feel momentum return.

The hidden cost of shipping too fast

Speed often gets treated as progress even when no one has agreed on what progress actually means. Here’s why clarity matters more than velocity.

antonsten.com

Graphic designer Emily Sneddon performed some impressive typographic anthropology to digitize the font used on San Francisco’s MUNI light rail cars. She tracked down the original engineer who worked on the destination display signs for a company called Trans-Lite.

Sneddon, on her blog:

Learning that the alphabet came from an engineer really explains its temperament and why I was drawn to it in the first place. The signs were designed for sufficiency: fixed segments, fixed grid, and no extras. Characters were created only as destinations required them, while other characters, like the Q, X, and much of the punctuation, were never programmed into the signs. In reducing everything to its bare essentials, somehow character emerged, and it’s what inspired me to design Fran Sans.

Rows of rectangular black-and-white tiled blocks forming a geometric uppercase alphabet, digits, and symbols using white grid lines.

Fran Sans by Emily Sneddon

Fran Sans: Emily Sneddon digitizes Muni's Breda 3x5 display alphabet, traces Trans-Lite engineer Gary Wallberg, preserving a fading San Francisco type.

emilysneddon.com

Echoing my series on the design talent crisis and other articles warning against the practice of cutting back on junior talent, Vaughn Tan offers yet another dimension: subjective decision-making skills are only honed through practice. But the opportunities given to junior staff for this type of decision-making are slim.

But to back up, here’s Tan explaining what subjective decision-making is:

These are decisions where there’s no one “correct” answer and the answers that work can’t be known in advance. Subjective decisionmaking requires critical thinking skills to make strongly reasoned arguments, identify appropriate evidence, understand the tradeoffs of different arguments, make decisions that may (or may not) be correct, and develop compelling narratives for those decisions.

While his article isn’t about AI, nor about companies declining to hire juniors, it is about companies failing to develop juniors and to let them practice this type of decision-making in low-stakes situations.

Critical thinking and judgment require practice. Practice needs to be frequent, and needs to begin at a low level with very few consequences that are important. This small-scale training in subjective decisionmaking and critical thinking is the best way to learn how to do it properly in more consequential situations.

If you wait until someone is senior to teach judgment, their first practice attempts have serious consequences. High-stakes decisionmaking pressure cannot be simulated realistically; learning how to deal with it requires actual practice with real consequences that progressively increases in scope and consequentiality.

And why is this all important? Not developing junior staff creates a bottleneck—only seniors can make these judgment calls—and, one day, a succession problem: who takes over when the seniors leave or retire?

Judgment from the ground up

tl;dr: Critical thinking is foundational for making decisions that require subjective judgment. People learn how to do subjective decisionmaking

vaughntan.org

This short article could easily fall under the motivational category, but I couldn’t help but draw parallels to what we do as designers when working as part of a product team.

Hazel Weakly says that people who see systems tend, sooner or later, to end up in charge of them. And to be a leader is to “understand that you’ll find yourself stranded in the middle of the ocean one day.”

Not just you, but everyone you lead. And you’ll need to chart a course. In the ever-changing winds, the ever-shifting tides, the unknown weather, and with an inability to see up or down or basically anywhere except a few minutes away. You won’t have the time to find your bearings even if you could. Yet, somehow, in this sea of swirling and infinite complexities and probabilities, in the midst of incalculable odds, you will find yourself needing to have simultaneously several different things…

The first thing is accepting that you will fail. I equate this to knowing that design is about trial and error, testing and measuring, and then adjusting.

The second thing is unshakable conviction that “you will succeed.” I see success as solving the problem, coming up with a solution that helps users do what they need. And you know what? Designers will succeed when they follow the design process.

Finally, the third thing is to “prepare and make ready everyone around you.” Which means convincing your product and engineering counterparts and other stakeholders that the solution you’re advocating for is the right one.

To Be a Leader of Systems | Hazel Weakly

Picture with me, if you will, the absurdity of finding yourself swimming in the middle of the ocean. First think about the ocean and how deep and infinitely…

hazelweakly.me

One of the most interesting things about design systems is how many of them are public—maybe not open source, but public so that we can all learn from them.

The earliest truly public, documented design systems showed up in the early 2010s. There isn’t a single “first,” but a few set the tone. GOV.UK published openly and became the public‑sector benchmark. Google’s Material landed in 2014 with a comprehensive spec. Salesforce’s Lightning started surfacing around 2013–2014 and matured mid‑decade. IBM’s Carbon followed soon after. Earlier frameworks like Bootstrap and Foundation (2011) acted like de facto systems for many teams, but they weren’t a company’s product design system made public.

PJ Onori says that public design systems are a “marketplace of ideas.”

Public design systems have lifted all boats in the harbor. Most design system teams do the rounds to see how other teams have tackled problems. Every system that raises the bar puts healthy pressure on others to meet or exceed it. This shared ecosystem may be the most important facet of the design systems practice.

Onori also says that there may be a growing trend to shut down public design systems:

There’s a growing trend to close down public systems. Funny enough, the first thing I did when I left Pinterest was clone the Gestalt repo. I had this spidey sense it wouldn’t be around forever. Yes, their web codebase is still open source, but the docs have gone private. That one stung. Gestalt wasn’t the first design system to be public. It wasn’t the best one either. But its hat was in the ring–and that’s what mattered.

But that’s only one design system, right? Sadly, I’m hearing more and more chatter about mounting pressure to privatize these systems.

This is an incredibly shitty idea.

Why? Because that’s how we all learn from each other. That’s how something like the Component Gallery can exist as a resource for all of us.

Open design systems are the library for people wanting to get into design systems. They’re a free resource to expand their understanding. There’s no college of design systems. Bootcamps exist, but they’re bootcamps–and I’ll leave it at that. The generation who shaped design systems didn’t create universities–they built libraries. Those libraries can train the next generation once people like me age out. When the libraries go, so does the transfer of knowledge.

Public design systems are worth it

It’s incredibly valuable to make a design system available to all–no matter what the bean-counters say.

pjonori.blog

There’s a myth that B2B marketing needs to be boring. Wrong. I’ve long believed that B2B advertising and marketing can and should be more consumer-like because at the end of the day, it’s a human on the other side of that message that needs to receive it. Sure, the buying cycle and decision-making is different, but the initial recipient is one person.

Creative director Scott McGuffie agrees, arguing in PRINT Magazine:

The best B2B work today doesn’t look different for the sake of it; it feels relevant to the world around it. Whether through wit, humanity, storytelling, or design, great B2B work connects to the same sensibilities that drive consumer creativity, allowing B2B to show up in new spaces, such as entertainment streaming services, once considered only a B2C space. It proves that professionalism and imagination are not mutually exclusive.

B2B Doesn’t Need to Be Dull – PRINT Magazine

Expectations say that B2B campaigns must be rational and serious, while B2C are creative and emotional. Yet that no longer reflects the world we live in.

printmag.com

Imagine working for seven years designing the prototyping features at Figma and then seeing GPT-4 and realizing what AI can soon do in the future. That’s the story of Figma designer–turned–product manager Nikolas Klein. He shares his journey via a lovely illustrated comic—Webtoon style.

Klein emphasizes:

The truth is: There will always be new problems to solve. New ideas to take further. Even with AI, hard problems are still hard. An answer may come faster, but it’s not always right.

Hard Problems Are Still Hard: A Story About the Tools That Change and the Work That Doesn’t | Figma Blog

Figma designer–turned–product manager Nikolas Klein worked on building prototyping tools for seven years. Then AI changed the game.

figma.com

We’ve been feeling it for a while. AI-generated posts and comments filling up the feeds on LinkedIn. Em dashes were said to be the tell that AI wrote the content. Other patterns are easy to spot, like overuse of emojis in headings and my personal most-hated, the “it’s not X, it’s Y.” That type of construction is called an antithesis and it’s exploded. And now that I’ve pointed it out, I’m sure you’ll notice it everywhere too. Sorry, not sorry.

Sam Kriss, exploring why AI writes the way it does:

A lot of A.I.’s choices make sense when you understand that it’s…trying to write well. It knows that good writing involves subtlety: things that are said quietly or not at all, things that are halfway present and left for the reader to draw out themselves. So to reproduce the effect, it screams at the top of its voice about how absolutely everything in sight is shadowy, subtle and quiet. Good writing is complex. A tapestry is also complex, so A.I. tends to describe everything as a kind of highly elaborate textile. Everything that isn’t a ghost is usually woven. Good writing takes you on a journey, which is perhaps why I’ve found myself in coffee shops that appear to have replaced their menus with a travel brochure. “Step into the birthplace of coffee as we journey to the majestic highlands of Ethiopia.” This might also explain why A.I. doesn’t just present you with a spreadsheet full of data but keeps inviting you, like an explorer standing on the threshold of some half-buried temple, to delve in.

All of this contributes to the very particular tone of A.I.-generated text, always slightly wide-eyed, overeager, insipid but also on the verge of some kind of hysteria. But of course, it’s not just the words — it’s what you do with them. As well as its own repertoire of words and symbols, A.I. has its own fundamentally manic rhetoric. For instance, A.I. has a habit of stopping midway through a sentence to ask itself a question. This is more common when the bot is in conversation with a user, rather than generating essays for them: “You just made a great point. And honestly? That’s amazing.”

Why Does A.I. Write Like … That?

(Gift Link) If only they were robotic! Instead, chatbots have developed a distinctive — and grating — voice.

nytimes.com
Storyboard grid showing a young man and family: kitchen, driving, airplane, supermarket, night house, grill, dinner.

Directing AI: How I Made an Animated Holiday Short

My first taste of generating art with AI was back in 2021 with Wombo Dream. I even used it to create very trippy illustrations for a series I wrote on getting a job as a product designer. To be sure, the generations were weird, if not outright ugly. But it was my first test of getting an image by typing in some words. Both Stable Diffusion and Midjourney gained traction the following year and I tried both as well. The results were never great or satisfactory. Years upon years of being an art director had made me very, very picky—or put another way, I had developed taste.

I didn’t touch generative AI art again until I saw a series of photos by Lars Bastholm playing with Midjourney.

Child in yellow jacket smiling while holding a leash to a horned dragon by a park pond in autumn.

Lars Bastholm created this in Midjourney, prompting “What if, in the 1970s, they had a ‘Bring Your Monster’ festival in Central Park?”

That’s when I went back to Midjourney and started to illustrate my original essays with images generated by it, but usually augmented by me in Photoshop.

In the intervening years, generative AI art tools had developed a common set of functionality that was all very new to me: inpainting, style, chaos, seed, and more. Beyond closed systems like Midjourney and OpenAI’s DALL-E, open source models from Stable Diffusion, Flux, and now a plethora of Chinese models offer even better prompt adherence and controllability via even more opaque-sounding functionality like control nets, LoRAs, CFG, and other parameters. It’s funny to me that for a very artistic field, the associated products to enable these creations are very technical.

My Site Stats for 2025

In 2025, I published 328 posts with a total of 118,445 words on this blog. Of course, in most of the posts, I’m quoting others, so excluding block quotes—those quoted passages greater than a sentence—I’m down to 76,226 words. Still pretty impressive, I’d say.

Post analysis 2025 - 328 posts. Top months: Oct 45, Jul 42, Mar 4. Link posts 283 (86%). Total words 118,445, avg 361.

I used Claude Code to write a little script that analyzed my posts from last year.
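
A script like that is straightforward to sketch. This is not the actual script Claude Code produced; it’s my own rough approximation, and it assumes posts live as Markdown files in a `posts/` directory with block quotes marked by a leading `>`:

```python
# Rough sketch of a post-analysis script: count words per post, with an
# option to exclude Markdown blockquote lines (quoted passages).
from pathlib import Path

def count_words(text, include_quotes=True):
    """Count words in a post, optionally skipping blockquote lines."""
    words = 0
    for line in text.splitlines():
        stripped = line.lstrip()
        if stripped.startswith(">"):
            if not include_quotes:
                continue  # skip quoted passages entirely
            stripped = stripped.lstrip("> ")  # don't count the quote marker
        words += len(stripped.split())
    return words

def analyze(posts_dir="posts"):
    """Summarize post count, total words, and words excluding quotes."""
    posts = list(Path(posts_dir).glob("*.md"))
    total = sum(count_words(p.read_text()) for p in posts)
    own = sum(count_words(p.read_text(), include_quotes=False) for p in posts)
    return {"posts": len(posts), "total_words": total, "own_words": own}
```

The `total_words` versus `own_words` split mirrors the 118,445 versus 76,226 figures above: one pass counts everything, the other skips any line that starts with a blockquote marker.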

In reviewing data from my analytics package Umami, it is also interesting which posts received the most views. By far it was “Beyond the Prompt,” my AI prompt-to-code shootout article. The others in the top five were:

That last one has always surprised me. I must’ve hit the Google lottery on it for some reason.

Speaking of links, since April—no data before—visitors clicked on links mentioned on this blog 2,949 times. I also wanted to see which linked items were most popular, by outbound clicks:

  1. AI 2027, naturally
  2. Smith & Diction’s catalog of “Usable Google Fonts”
  3. Matt Webb’s post on Do What I Mean
  4. A visualization called “The Authoritarian Stack” that shows how power, money, and companies connect
  5. The New York Times list of the “25 Most Influential Magazine Covers of All Time” (sadly, the gift link has since expired)

And finally, the totals for the year were 58,187 views and 42,075 visitors. That works out to an average of about 3,500 visitors per month. Tiny compared with other blogs out there. But my readers mean the world to me.

Anyway, some interesting stats, at least to me. Here’s to more in 2026.

Foggy impressionist painting of a steam train crossing a bridge, plume of steam and a small rowboat on the river below.

The Year AI Changed Design

At the beginning of this year, AI prompt-to-code tools were still very new to the market. Lovable had just relaunched in December and Bolt debuted just a couple months before that. Cursor was my first taste of using AI to code back in November of 2024. As we sit here in December, just 12 months later, our profession and the discipline of design has materially changed. Now, of course, the core is still the same. But how we work, how we deliver, and how we achieve results, are different.

When ChatGPT got good (around GPT-4), I began using it as a creative sounding board. Design is never a solitary activity and feedback from peers and partners has always been a part of the process. To be able to bounce ideas off of an always-on, always-willing creative partner was great. To be sure, I didn’t share sketches or mockups; I was playing with written ideas.

Now, ChatGPT or Gemini’s deep research features are often where I start when I begin to tackle a new feature. And after the chatbot has written the report, I’ll read it and ask a lot of questions as a way of learning and internalizing the material. I’ll then use that as a jumping off point for additional research. Many designers on my team do the same.

I’ve linked to a footer gallery, a navbar gallery, and now to round us out, here is a full-on Component Gallery. Web developer Iain Bean has been maintaining this library since 2019.

Bean writes in the about page:

The original idea for this site came from A Pattern Language, a 1977 book focused on architecture, building and planning, which describes over 250 ‘patterns’: forms which fit specific contexts, or to put it another way, solutions to design problems. Examples include: ‘Beer hall’, ‘Positive outdoor space’ and ‘Light on two sides of every room’.

Whereas the book focuses on the physical world, my original aim with this site was to focus on those patterns that appear on the web; these often borrow the word ‘pattern’ (see Patterns on the GOV.UK design system), but are more commonly called components, hence ‘the component gallery’ — unlike a component library, most of these components aren’t ready to use off-the-shelf, but they’ll hopefully inspire you to design your own solution to the problem you’re working to solve.

So if you ever need a reference for how different design systems handle certain components (e.g., combobox, segmented control, or toast), this is your site.

The Component Gallery

The Component Gallery

An up-to-date repository of interface components based on examples from the world of design systems, designed to be a reference for anyone building user interfaces.

component.gallery

One more post down memory lane. Phil Gyford chronicled his first few months online, thirty years ago in 1995. He talks of modems, floppies, email, Usenet, IRC, and friendly strangers on the internet.

I had forgotten how onerous it was to get online back then. Gyford writes:

It’s hard to convey how difficult it was to set things up. So new and alien to me. When reading computer magazines I’d always skipped articles about networking and while the computers at university had been connected together, that was only for the purposes of printing, scanning and transferring files.

First there was the issue of getting online at all. The Internet Starter Kit spent 59 pages explaining how to set up MacTCP, and PPP or SLIP, two different methods of connecting to the internet, the differences of which happily escape me now. I spent a lot of late nights fiddling with control panels and extensions, learning about IP addresses, domain name servers, etc.

And Gyford reminds us just how marvelous the invention of the internet was:

Before the web – and all the rest of it – how could you have shared your words with anyone? Write a letter to a newspaper or magazine and hope they published it a few days or months later? Create your own fanzine and distribute copies one-by-one to strangers, and posted in individually addressed and stamped envelopes? That was it, unless you were going to become a successful journalist or writer. Your reach, your world, was tiny.

But now, then, you could put anything you wanted on your own website and instantly it was visible by anyone in the world. OK, anyone in the world who was also online, which wasn’t many then, and they were all quite similar, but, still… they could be anywhere! And their number was growing.

And you could chat to people in real time and it didn’t matter where they were, they were here in front of you. Send emails back-and-forth to friends without writing letters, and buying stamps, and waiting days or weeks for a response. Instant! Weightless!

The post is worth a read. It’s complete with pictures of some artifacts from that time, including newspaper clippings, invoices, and journal entries.

My first months in cyberspace

My first months in cyberspace

Recalling the difficulties and wonder of getting online for the first time in 1995, including diary extracts from the time.

gyford.com icongyford.com

If you were into computers like I was between 1975 and 1998, you read Byte magazine. It wasn’t just product reviews and spec sheets—Byte offered serious technical depth, covering everything from assembly language programming to hardware architecture to the philosophy of human-computer interaction. The magazine documented the PC revolution as it happened, becoming required reading for anyone building or thinking deeply about the future of computing. It was also thick as hell.

Someone made a visual archive of Byte magazine, showing every printed page in a zoomable interface:

Before Hackernews, before Twitter, before blogs, before the web had been spun, when the internet was just four universities in a trenchcoat, there was BYTE. A monthly mainline of the entire personal computing universe, delivered on dead trees for a generation of hackers. Running from September 1975 to July 1998, its 277 issues chronicled the Cambrian explosion of the microcomputer, from bare-metal kits to the dawn of the commercial internet. Forget repackaged corporate press releases—BYTE was for the builders.

It’s a fun glimpse into the past before thin laptops, smartphones, and disco-colored gaming PCs.

Grid collage of vintage technology magazine pages and ads, featuring colorful retro layouts, BYTE covers and articles.

Byte - a visual archive

Explore a zoomable visual archive of BYTE magazine: all 277 issues (Sep 1975 - Jul 1998) scanned page-by-page, a deep searchable glimpse into the PC revolution.

byte.tsundoku.io iconbyte.tsundoku.io

The Whole Earth Catalog, published by Stewart Brand several times a year between 1968 and 1972 (and occasionally until 1998), was the internet before the internet existed. It curated tools, books, and resources for self-education and DIY living, embodying an ethos of access to information that would later define the early web. Steve Jobs famously called it “one of the bibles of my generation,” and for good reason—its approach to democratizing knowledge and celebrating user agency directly influenced the philosophy of personal computing and the participatory culture we associate with the web’s early days.

Curated by Barry Threw and collaborators, the Whole Earth Index is a near-complete archive of the Whole Earth Catalog and its descendant publications.

Here lies a nearly-complete archive of Whole Earth publications, a series of journals and magazines descended from the Whole Earth Catalog, published by Stewart Brand and the POINT Foundation between 1968 and 2002. They are made available here for scholarship, education, and research purposes.

The info page also includes a quote from Stewart Brand:

“Dateline Oct 2023, Exactly 55 years ago, in 1968, the Whole Earth Catalog first came to life. Thanks to the work of an ongoing community of people, it prospered in various forms for 32 years—sundry editions of the Whole Earth Catalog, CoEvolution Quarterly, The WELL, the Whole Earth Software Catalog, Whole Earth Review, etc. Their impact in the world was considerable and sustained. Hundreds of people made that happen—staff, editors, major contributors, board members, funders, WELL conference hosts, etc. Meet them here.” —Stewart Brand

Brand’s mention of The WELL is particularly relevant here—he founded that pioneering online community in 1985 as a digital extension of the Whole Earth ethos, creating one of the internet’s first thriving social networks.

View of Earth against black space with large white serif text "Whole Earth Index" overlaid across the globe.

Whole Earth Index

Here lies a nearly-complete archive of Whole Earth publications, a series of journals and magazines descended from the Whole Earth Catalog, published by Stewart Brand and the POINT Foundation between 1968 and 2002.

wholeearth.info iconwholeearth.info

Huei-Hsin Wang at NN/g published a post about how to write better prompts for AI prompt-to-code tools.

When we asked AI-prototyping tools to generate a live-training profile page for NN/G course attendees, a detailed prompt yielded quality results resembling what a human designer created, whereas a vague prompt generated inconsistent and unpredictable outcomes across the board.

The article spends a lot of time detailing what can go wrong. Personally, I don’t need to read about what I experience daily, so the interesting bit for me is about two-thirds of the way into the article, where Wang lists five strategies for getting better results.

  • Visual intent: Name the style precisely—use concrete design vocabulary or frameworks instead of vague adjectives. Anchor prompts with recognizable patterns so the model locks onto the look and structure, not “clean/modern” fluff.
  • Lightweight references: Drop in moodboards, screenshots, or system tokens to nudge aesthetics without pixel-pushing. Expect resemblance, not perfection; judge outcomes on hierarchy and clarity, not polish alone.
  • Text-led visual analysis: Have AI describe a reference page’s layout and style in natural language, then distill those characteristics into a tighter prompt. Combine with an image when possible to reinforce direction.
  • Mock data first: Provide realistic sample content or JSON so the layout respects information architecture. Content-driven prompts produce better grouping, hierarchy, and actionable UI than filler lorem ipsum.
  • Code snippets for precision: Attach component or layout code from your system or open-source libraries to reduce ambiguity. It’s the most exact context, but watch length; use selectively to frame structure.
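To make the “mock data first” strategy concrete, here’s a minimal sketch in Python of assembling a prompt that pairs precise visual vocabulary with realistic sample JSON instead of lorem ipsum. The field names, values, and style keywords are illustrative assumptions, not taken from Wang’s article:

```python
import json

# Hypothetical sample content for a course-attendee profile card.
# Realistic field names and values help the model infer grouping and hierarchy.
mock_attendee = {
    "name": "Ada Jensen",
    "role": "Senior UX Researcher",
    "courses_completed": 4,
    "next_session": "2025-02-11T09:00",
    "badges": ["Interaction Design", "UX Research"],
}

def build_prompt(data: dict) -> str:
    """Combine concrete design vocabulary with mock data in one prompt."""
    return (
        "Generate a profile page component.\n"
        "Style: card layout, 8px spacing grid, left-aligned labels, "
        "muted secondary text (not just 'clean and modern').\n"
        "Use this sample data to drive grouping and hierarchy:\n"
        f"{json.dumps(data, indent=2)}"
    )

print(build_prompt(mock_attendee))
```

The point isn’t the wrapper function; it’s that content-shaped input gives the model real information architecture to respect, which is exactly what filler text fails to provide.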

Prompt to Design Interfaces: Why Vague Prompts Fail and How to Fix Them

Prompt to Design Interfaces: Why Vague Prompts Fail and How to Fix Them

Create better AI-prototyping designs by using precise visual keywords, references, analysis, as well as mock data and code snippets.

nngroup.com iconnngroup.com

On the heels of OpenAI’s report “The state of enterprise AI,” Anthropic published a blog post detailing research about how AI is being used by the employees building AI. The researchers surveyed 132 engineers and researchers, conducted 53 interviews, and looked at Claude usage data.

Our research reveals a workplace facing significant transformations: Engineers are getting a lot more done, becoming more “full-stack” (able to succeed at tasks beyond their normal expertise), accelerating their learning and iteration speed, and tackling previously-neglected tasks. This expansion in breadth also has people wondering about the trade-offs—some worry that this could mean losing deeper technical competence, or becoming less able to effectively supervise Claude’s outputs, while others embrace the opportunity to think more expansively and at a higher level. Some found that more AI collaboration meant they collaborated less with colleagues; some wondered if they might eventually automate themselves out of a job.

The post highlights several interesting patterns.

  • Employees say Claude now touches about 60% of their work and boosts output by roughly 50%.
  • Employees say that 27% of AI‑assisted tasks are work that wouldn’t have happened otherwise—like papercut fixes, tooling, and exploratory prototypes.
  • Engineers increasingly use it for new feature implementation and even design/planning.

Perhaps most provocative is career trajectory. Many engineers describe becoming managers of AI agents, taking accountability for fleets of instances and spending more time reviewing than writing net‑new code. Short‑term optimism meets long‑term uncertainty: productivity is up, ambition expands, but the profession’s future shape—levels of abstraction, required skills, and pathways for growth—remains unsettled. See also my series on the design talent crisis.

Two stylized black line-drawn hands over a white rectangle on a pale green background, suggesting typing.

How AI Is Transforming Work at Anthropic

How AI Is Transforming Work at Anthropic

anthropic.com iconanthropic.com