
Our profession is changing rapidly. I’ve been covering that here for nearly a year now. Lots of posts come across my desk that say similar things. Tom Scott repeats a lot of what’s been said, but I’ll pull out a couple nuggets that caught my eye.

He declares that “Hands-on is the new default.” Quoting Vitor Amaral, a designer at Intercom:

Being craft-focused means staying hands-on, regardless of specialty or seniority. This won’t be a niche role, it will be an expectation for everyone, from individual contributors to VPs. The value lies in deeply understanding how things actually work, and that comes from direct involvement in the work.

As AI speeds up execution, the craft itself will become easier, but what will matter most is the critical judgment to craft the right thing, move fast, and push the boundaries of quality.

For those looking for work, Scott says, “You NEED to change how you find a job.” Quoting Felix Haas, investor and designer at Lovable:

Start building a real product and get a feeling for what it means to push something out into the market

Learn to use AI to prototype interactively → even at a basic level

Get comfortable with AI tools early → they’ll be your co-designer / sparring partner

Focus on solving real problems, not just making things look good (which has been a problem in the design space for a very long time)

Scott also says that “Design roles are merging,” and Ridd from Dive Club illustrates the point:

We are seeing a collapse of design’s monopoly on ideation where designers no longer “own” the early idea stage. PMs, engineers, and others are now prototyping directly with new tools.

If designers move too slow, others will fill the gap. The line between PM, engineer, and designer is thinner than ever. Anyone tool-savvy can spin up prototypes — which raises the bar for designers.

Impact comes from working prototypes, not just facilitation. Leading brainstorms or “owning process” isn’t enough. Real influence comes from putting tangible prototypes in front of the team and aligning everyone around them.

Design is still best positioned — but not guaranteed

Designers could lead this shift, but only if they step up. Ownership of ideation is earned, not assumed.

The future of product design

The future belongs to AI-native designers

verifiedinsider.substack.com

Is the AI bubble about to burst? Apparently, AI prompt-to-code tools like Lovable and v0 have peaked and are on their way down.

Alistair Barr writing for Business Insider:

The drop-off raises tough questions for startups that flaunted exponential annual recurring revenue growth just months ago. Analysts wrote that much of that revenue comes from month-to-month subscribers who may churn as quickly as they signed up, putting the durability of those flashy numbers in doubt.

Barr interviewed Eric Simons, CEO of Bolt, who said:

“This is the problem across all these companies right now. The churn rate for everyone is really high,” Simons said. “You have to build a retentive business.”

AI vibe coding tools were supposed to change everything. Now traffic is crashing.

Vibe coding tools have seen traffic drop, with Vercel’s v0 and Lovable seeing significant declines, raising sustainability questions, Barclays warns.

businessinsider.com

If you’ve ever wondered why every version of Hokusai’s “The Great Wave off Kanagawa” feels just a little bit different, this video from the British Museum is a gem. It dives into the subtle variations across 111 known prints and shows how art, time, and technique all leave their mark.

Capucine Korenberg from the British Museum spent over 50 hours just staring at different versions of the print, joking “This is about the same amount of time you would spend brushing your teeth over two years. So, next time you brush your teeth just think of me looking at The Great Wave.”

Hokusai’s 'The Great Wave' (and the differences between all 111 of them)

Did you know there are 113 identified copies of Hokusai's The Great Wave? I know the title says 111, but scientist Capucine Korenberg found another 2 after completing her research. What research was that? Finding every print of The Great Wave around the world and then sequencing them, to find out when they were created during the life cycle of the woodblocks they were printed from.

youtube.com

I love this framing by Patrizia Bertini:

Let me offer a different provocation: AI is not coming for your job. It is coming for your tasks. And if you cannot distinguish between the two, then yes — you should be worried.

Going further, she distinguishes between output and outcome:

Output is what a process produces. Code. Copy. Designs. Legal briefs. Medical recommendations. Outputs are the tangible results of a system executing its programmed or prescribed function — the direct product of following steps, rules, or algorithms. The term emerged in the industrial era, literally describing the quantity of coal or iron a mine could extract in a given period. Output depends entirely on the efficiency and capability of the process that generates it.

Outcome is what happens when that output meets reality. An outcome requires context, interpretation, application, and crucially — intentionality. Outcomes demand understanding not just what was produced, but why it matters, who it affects, and what consequences ripple from it. Where outputs measure productivity, outcomes measure impact. They are the ultimate change or consequence that results from applying an output with purpose and judgment.

She argues that “AI can generate outputs. It cannot, however, create outcomes.”

This reminds me of a recent thread by engineer Marc Love:

It’s insane just how much how I work has changed in the last 18 months.

I almost never hand write code anymore except when giving examples during planning conversations with LLMs.

I build multiple full features per day, each of which would’ve taken me a week or more to hand write. Building full drafts and discarding them is basically free.

Well over half of my day is spent ideating, doing systems design, and deciding what and what not to build.

It’s still conceptually the same job, but if I list out the specific things I do in a day versus 18 months ago, it’s almost completely different.

Care about the outcome, not the output.


When machines make outputs, humans must own outcomes

The future of work in the age of AI and deepware.

uxdesign.cc

In an announcement to users this morning, Visual Electric said they were being acquired by Perplexity—or more accurately, the team that makes Visual Electric will be hired by Perplexity. The service will shut down in the next 90 days.

Today we’re sharing the next step in Visual Electric’s journey: we’ve been acquired by Perplexity. This is a milestone that marks both an exciting opportunity for our team and some big changes for our product.

Over the next 90 days we’ll be sunsetting Visual Electric, and our team will be forming a new Agent Experiences group at Perplexity.

While we’ve seen acquihires and shutdowns in the AI infrastructure space (e.g., Scale AI) and the coding space (e.g., Windsurf), I don’t believe we’ve seen an exit event like this in the image or video gen AI space yet. Of course, The Browser Company announced its acquisition by Atlassian last month.

Building gen AI tools at this moment is incredibly competitive. I think it takes an entrepreneur with an even stronger stomach than in the pre-ChatGPT era. So kudos to the folks at Visual Electric for having a good outcome and getting to continue their work at Perplexity. But I don’t think this is the last consolidation we’ll see in this space.


Visual Electric is Joining Perplexity

Today we’re sharing the next step in Visual Electric’s journey: we’ve been acquired by Perplexity. This is a milestone that marks both an exciting opportunity for our team and some big changes for our product.

visualelectric.com

Tim Berners-Lee, the father of the web who gave away the technology for free, says that we are at an inflection point with data privacy and AI. But before he makes that point, he reminds us that we are the product:

Today, I look at my invention and I am forced to ask: is the web still free today? No, not all of it. We see a handful of large platforms harvesting users’ private data to share with commercial brokers or even repressive governments. We see ubiquitous algorithms that are addictive by design and damaging to our teenagers’ mental health. Trading personal data for use certainly does not fit with my vision for a free web.

On many platforms, we are no longer the customers, but instead have become the product. Our data, even if anonymised, is sold on to actors we never intended it to reach, who can then target us with content and advertising. This includes deliberately harmful content that leads to real-world violence, spreads misinformation, wreaks havoc on our psychological wellbeing and seeks to undermine social cohesion.

And about that fork in the road with AI:

In 2017, I wrote a thought experiment about an AI that works for you. I called it Charlie. Charlie works for you like your doctor or your lawyer, bound by law, regulation and codes of conduct. Why can’t the same frameworks be adopted for AI? We have learned from social media that power rests with the monopolies who control and harvest personal data. We can’t let the same thing happen with AI.


Why I gave the world wide web away for free

My vision was based on sharing, not exploitation – and here’s why it’s still worth fighting for

theguardian.com

In my most recent post, I called out our design profession for our part in developing these addictive products. Jeffrey Inscho brings it back up to the tech industry at large and observes that they’re actually publishers:

The executives at these companies will tell you they’re neutral platforms, that they don’t choose what content gets seen. This is a lie. Every algorithmic recommendation is an editorial decision. When YouTube’s algorithm suggests increasingly extreme political content to keep someone watching, that’s editorial. When Facebook’s algorithm amplifies posts that generate angry reactions, that’s editorial. When Twitter’s trending algorithms surface conspiracy theories, that’s editorial.

They are publishers. They have always been publishers. They just don’t want the responsibility that comes with being publishers.

His point is that if these social media platforms are sorting and promoting posts, it’s an editorial approach and they should be treated like newspapers. “It’s like a newspaper publisher claiming they’re not responsible for what appears on their front page because they didn’t write the articles themselves.”

The answer, Inscho argues, is regulation of the algorithms.

Turn Off the Internet

Big tech has built machines designed for one thing: to hold …

staticmade.com

When I read this, I thought to myself, “Geez, this is what a designer does.” I think there is a lot of overlap between what we do as product designers and what product managers do. One critical one—in my opinion, and why we’re calling ourselves product designers—is product sense. Product sense is the skill of finding real user needs and creating solutions that have impact.

So I think people can read this with two lenses:

  • If you’re a designer who executes the assignments you’re given, jumping into Figma right away, read this to be more well-rounded and understand the why of what you’re making.
  • If you’re a designer who spends 80% of your time questioning everything and defining the problem, and only 20% of your time in Figma, read this to see how much overlap you actually have with a PM.

BTW, if you’re in the first bucket, I highly encourage you to gain the skills necessary to migrate to the second bucket.

While designers often stay on top of visual design trends or the latest best practices from NNG, Jules Walter suggests an even wider aperture. Writing in Lenny’s Newsletter:

Another practice for developing creativity is to spend time learning about emerging trends in technology, society, and regulations. Changes in the industry create opportunities for launching new products that can address user needs in new ways. As a PM, you want to understand what’s possible in your domain in order to come up with creative solutions.


How to develop product sense

Jules Walter shares a ton of actionable and practical advice to develop your product sense, explains what product sense is, how to know if you’re getting better,

lennysnewsletter.com

The headline rings true to me because that’s what I look for in designers and how I run my team. The software that we build is too complex and too mission-critical for designers to vibe-code—at least given today’s tooling. But each one of the designers on my team can fill in for a PM when they’re on vacation.

Kai Wong, writing in UX Collective:

One thing I’ve learned, talking with 15 design leaders (and one CEO), is that a ‘designer who codes’ may look appealing, but a ‘designer who understands business’ is far more valuable and more challenging to replace.

You already possess the core skill that makes this transition possible: the ability to understand users with systematic observation and thoughtful questioning.

The only difference, now, is learning to apply that same methodology to understand your business.

Strategic thinking doesn’t require fancy degrees (although it may sometimes help).

Ask strategic questions about business goals. Understand how to balance user and business needs. Frame your design decisions in terms of measurable business impact.


Why many employers want Designers to think like PMs, not Devs

How asking questions, which used to annoy teams, is now critical to UX’s future

uxdesign.cc

I’m happy that the conversation around the design talent crisis continues. Carly Ayres, writing for It’s Nice That, picks up the torch and speaks to designers and educators about this topic. What struck me—and I think what adds to the dialogue—is the notion of the belief gap. Ayres spoke with Naheel Jawaid, founder of Silicon Valley School of Design, about it:

“A big part of what I do is just being a coach, helping someone see their potential when they don’t see it yet,” Naheel says. “I’ve had people tell me later that a single conversation changed how they saw themselves.”

In the past, belief capital came from senior designers taking juniors under their wing. Today, those same seniors are managing instability of their own. “It’s a bit of a ‘dog eat dog world’-type vibe,” Naheel says. “It’s really hard to get mentorship right now.”

The whole piece is great. Tighter than my sprawling three-parter. I do think there’s a piece missing, though. While Ayres highlights the issue and offers suggestions from design leaders, businesses need to step up and do something about it—i.e., hire more juniors. Recognizing the problem is only the first step.


Welcome to the entry-level void: what happens when junior design jobs disappear?

Entry-level jobs are disappearing. In their place: unpaid gigs, cold DMs and self-starters scrambling for a foothold. The ladder’s gone – what’s replacing it, and who’s being left behind?

itsnicethat.com

As much as I defended the preview, and as much as Apple wants to make Liquid Glass a thing, the new UI is continuing to draw criticism. Dan Moren for Six Colors:

“Glass” is the overall look of these updates, and it’s everywhere. Transparent, frosted, distorting. In some places it looks quite cool, such as in the edge distortion when you’re swiping up on the lock screen. But elsewhere, it seems to me that glass may not be quite the right material for the job. The Glass House might be architecturally impressive, but it’s not particularly practical.

It’s also a definite philosophical choice, and one that’s going to engender some criticism—much of it well-deserved. Apple has argued that it’s about getting controls out of the way, but is that really what’s happening here? It’s hard to argue that having a transparent button sitting right on top of your email is helping that email be more prominent. To take this argument to its logical conclusion, why is the keyboard not fully transparent glass over our content?

I’ve yet to upgrade myself. I will say that everyone dislikes change. And lest we forget, the now-ubiquitous flat design introduced by iOS 7 was criticized too.


iOS 26 Review: Through a glass, liquidly

iOS 26! It feels like just last year we were here discussing iOS 18. How time flies. After a year that saw the debut of Apple Intelligence and the subsequent controversy over the features that it d…

sixcolors.com

Jason Spielman put up a case study on his site for his work on Google’s NotebookLM:

The mental model of NotebookLM was built around the creation journey: starting with inputs, moving through conversation, and ending with outputs. Users bring in their sources (documents, notes, references), then interact with them through chat by asking questions, clarifying, and synthesizing before transforming those insights into structured outputs like notes, study guides, and Audio Overviews.

And yes, he includes a sketch he did on the back of a napkin.

I’ve always wondered about the UX of NotebookLM. It’s not typical and, if I’m being honest, not exactly super intuitive. But after a while, it does make sense. Maybe I’m the outlier though, because Spielman’s grandmother found it easy. In an interview last year on Sequoia Capital’s Training Data, he recalls:

I actually do think part of the explosion of audio overviews was the fact it was a simple one click experience. I was on the phone with my grandma trying to explain her how to use it and it actually didn’t take any explanation. I’m like, “Drop in a source.” And she’s like, “Oh! I see. I click this button to generate it.” And I think that the ease of creation is really actually what catalyzed so much explosion. So I think when we think about adding these knobs [for customization] I think we want to do it in a way that’s very intentional.


Designing NotebookLM

Designer, builder, and visual storyteller. Now building Huxe. Previously led design on NotebookLM and contributed to Google AI projects like Gemini and Search. Also shoot photo/video for brands like Coachella, GoPro, and Rivian.

jasonspielman.com

Chatboxes have become the uber box for all things AI. The criticism of this blank box has been the cold-start issue: new users don’t know what to type. Designers shipping these products mostly got around this problem by offering suggested prompts to teach users about the possibilities.

The issue on the other end is that expert users end up creating their own library of prompts to copy and paste into the chatbox for repetitive tasks.

Sharang Sharma, writing in UX Collective, illustrates how these UIs can be smarter by being predictive of intent:

Contrary, Predictive UX points to an alternate approach. Instead of waiting for users to articulate every step, systems can anticipate intent based on behavior or common patterns as the user types. Apple Reminders suggests likely tasks as you type. Grammarly predicts errors and offers corrections inline. Gmail’s Smart Compose even predicts full phrases, reducing the friction of drafting entirely.

Sharma says that the goal of predictive UX is to “reduce time-to-value and reframe AI as an adaptive partner that anticipates user’s intent as you type.”

Imagine a little widget that appears within the chatbox as you type. Kind of a cool idea.
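
To make that concrete, here is a toy sketch of the pattern (my own example, not from Sharma’s article): rank the user’s past prompts by prefix match and frequency as they type, the way Reminders or Smart Compose surface likely completions.

```python
# Toy predictive-prompt suggester: rank past prompts by prefix match and
# frequency as the user types. A real system would use a model plus richer
# context; this only illustrates the interaction pattern.
from collections import Counter

history = Counter({
    "summarize this meeting transcript": 14,
    "summarize this pdf into bullet points": 9,
    "draft a reply to this email": 7,
    "translate this to spanish": 3,
})

def suggest(partial: str, k: int = 3) -> list[str]:
    p = partial.lower().strip()
    matches = [(count, prompt) for prompt, count in history.items()
               if prompt.startswith(p)]
    return [prompt for _, prompt in sorted(matches, reverse=True)[:k]]

print(suggest("summ"))
# ['summarize this meeting transcript', 'summarize this pdf into bullet points']
```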


How can AI UI capture intent?

Exploring contextual prompt patterns that capture user intent as it is typed

uxdesign.cc

Ah, this brings back memories! I spent so much time in MacPaint working with these patterns when I was young. Paul Smith faithfully recreates them:

I was working on something and thought it would be fun to use one of the classic Mac black-and-white patterns in the project. I’m talking about the original 8×8-pixel ones that were in the original Control Panel for setting the desktop background and in MacPaint as fill patterns.

I figured there must be clean, pixel-perfect GIFs or PNGs of them somewhere on the web. And perhaps there are, but after poking around a bit, I ran out of energy for that, but by then had a head of steam for extracting the patterns en masse from the original source, somehow. Then I could produce whatever format I needed for them.


Classic 8×8-pixel B&W Mac patterns

TL;DR: I made a website for the original classic Mac patterns I was working on something and thought it would be fun to use one of the classic Mac black-and-white patterns in the project. I'm talking about the original 8×8-pixel ones that were in the...

pauladamsmith.com

Thinking about this morning’s link about web forms, if you abstract why unstructured input is so powerful, you get to the point of human-computer interaction: the computer should do what the user intends, not merely execute the buttons they push.

Matt Webb reminds us about DWIM, or the Do What I Mean philosophy in computing, coined by Warren Teitelman in 1966. Webb quotes computer scientist Larry Masinter:

DWIM is an embodiment of the idea that the user is interacting with an agent who attempts to interpret the user’s request from contextual information. Since we want the user to feel that he is conversing with the system, he should not be stopped and forced to correct himself or give additional information in situations where the correction or information is obvious.

Webb goes on to say:

Squint and you can see ChatGPT as a DWIM UI: it never, never, never says “syntax error.”

Now, arguably it should come back and ask for clarifications more often, and in particular DWIM (and AI) interfaces are more successful the more they have access to the user’s context (current situation, history, environment, etc).

But it’s a starting point. The algo is: design for capturing intent and then DWIM; iterate until that works. AI unlocks that.
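
Webb’s algo fits in a few lines. Here is a minimal sketch, with a deliberately naive keyword matcher standing in for real intent inference; every name in it is invented for illustration:

```python
# Schematic DWIM loop: guess the user's intent from the request plus
# context, act when confident, and ask a question instead of erroring.
from dataclasses import dataclass

@dataclass
class Guess:
    action: str
    confidence: float

def interpret(request: str, context: dict) -> Guess:
    r = request.lower()
    if "remind" in r:
        return Guess("create_reminder", 0.9)
    if "open" in r and context.get("recent_files"):
        return Guess(f"open {context['recent_files'][0]}", 0.7)
    return Guess("unknown", 0.1)

def dwim(request: str, context: dict) -> str:
    guess = interpret(request, context)
    if guess.confidence < 0.5:
        # Never "syntax error": come back with a clarifying question.
        return "I wasn't sure what you meant. Can you say more?"
    return f"doing: {guess.action}"

print(dwim("remind me to buy milk", {}))                             # doing: create_reminder
print(dwim("open that file again", {"recent_files": ["notes.md"]}))  # doing: open notes.md
print(dwim("asdfgh", {}))                                            # clarifying question
```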


The destination for AI interfaces is Do What I Mean

Posted on Friday 29 Aug 2025. 840 words, 10 links. By Matt Webb.

interconnected.org

Forms are one of the fundamental things we make users deal with in software. Whether it’s the login screen, a billing address form, or a mortgage application, forms are the main method for getting data from users into computer-accessible databases. The human decides which piece of information goes into which column of the database. With AI, form filling should be much simpler.

Luke Wroblewski makes the argument:

With Web forms, the burden is on people to adapt to databases. Today’s AI models, however, can flip this requirement. That is, they allow people to provide information in whatever form they like and use AI to do the work necessary to put that information into the right structure for a database.

How can it work?

With AgentDB connected to an AI model (via an MCP server), a person can simply say “add this” and provide an image, PDF, audio, video, you name it. The model will use AgentDB’s template to decide what information to extract from this unstructured input and how to format it for the database. In the case where something is missing or incomplete, the model can ask for clarification or use tools (like search) to find possible answers.
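
As a rough sketch of that flow (not AgentDB’s actual API; the schema and prompt here are invented for illustration), using the OpenAI Python SDK and SQLite:

```python
# Sketch: free-form input in, structured row out. The model does the
# mapping from "whatever form people like" to the database's shape.
import json
import sqlite3

from openai import OpenAI

client = OpenAI()
conn = sqlite3.connect("expenses.db")
conn.execute("CREATE TABLE IF NOT EXISTS expenses (vendor TEXT, amount REAL, date TEXT)")

def add_unstructured(text: str) -> None:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                "Extract vendor, amount, and ISO-8601 date from the user's text. "
                'Reply as JSON: {"vendor": str, "amount": number, "date": str}. '
                "Use null for anything missing so the app can ask a follow-up."
            )},
            {"role": "user", "content": text},
        ],
    )
    row = json.loads(response.choices[0].message.content)
    conn.execute("INSERT INTO expenses VALUES (:vendor, :amount, :date)", row)
    conn.commit()

add_unstructured("lunch at Luigi's yesterday, 23 bucks with tip")
```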


Unstructured Input in AI Apps Instead of Web Forms

Web forms exist to put information from people into databases. The input fields and formatting rules in online forms are there to make sure the information fits...

lukew.com

I believe purity tests of any sort are problematic. And it’s much too easy to throw around the “This is AI slop!” claim. AI was used in the main title sequence for the Marvel TV show Secret Invasion. But it was on purpose and aligned with the show’s themes of shapeshifters.

Anyway, Daniel John, writing in Creative Bloq:

[Lady] Gaga just dropped the music video for The Dead Dance, a song debuted in Season 2 of Netflix’s Wednesday. Directed by Tim Burton, it’s a suitably nightmarish black-and-white cacophony of monsters and dolls. But some are already claiming that parts of it were made using AI.

John shows a tweet from @graveyardquy as an example:

i didn’t think we’d ever be in a timeline where a tim burton x lady gaga collab would turn out to be AI slop… but here we are

We need to separate quality critiques from tool usage. If it looks good and is appropriate, I’m fine with CG, AI, and whatever comes next that helps tell the story. Same goes for what we do as designers, ’natch.

Gaga’s song is great. It’s a bop, as the kids say, with a neat music video to boot.


The Lady Gaga backlash proves AI paranoia has gone too far

Just because it looks odd, doesn't mean it's AI.

creativebloq.com

Brad Frost, of atomic design fame, wrote a history of themeable UIs as part of a deep dive into design tokens. He writes, “Design tokens may be the latest incarnation, but software creators have been creating themeable user interfaces for quite a long time!”

About Mario and Luigi from Super Mario Bros.:

It’s wild that two of the most iconic characters in the history of pop culture — red-clad Mario and green-clad Luigi — are themeable UI elements born from pragmatic ingenuity to overcome technological challenges. Freaking amazing.
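
The underlying trick, palette swapping, is easy to show: the sprite stores palette indices rather than colors, so the same art renders as either brother depending on which palette you hand it. A toy version, with approximate colors rather than the NES originals:

```python
# Theming by palette swap: one sprite, two palettes. Index 0 is transparent.
PALETTES = {
    "mario": {0: None, 1: "#b53120", 2: "#ea9e22", 3: "#6b6d00"},
    "luigi": {0: None, 1: "#10a010", 2: "#ffffff", 3: "#000000"},
}

SPRITE = [  # a 4x4 corner of a sprite, stored as palette indices
    [0, 1, 1, 0],
    [1, 2, 2, 1],
    [1, 2, 2, 1],
    [0, 3, 3, 0],
]

def render(sprite: list[list[int]], theme: str) -> list[list]:
    palette = PALETTES[theme]
    return [[palette[index] for index in row] for row in sprite]

mario = render(SPRITE, "mario")
luigi = render(SPRITE, "luigi")  # same sprite, different theme
```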

The History of Themeable User Interfaces

A full-ish history of user interfaces that can be themed to meet the opportunities and constraints of the time

bradfrost.com

Here’s a fun visual essay about artist Yufeng Zhao’s piece “Alt Text in NYC.” It’s essentially a visual search engine for all the text (words) visible on the streets of New York City. The dataset comprises over eight million photos from Google Street View! Matt Daniels, writing for The Pudding:

The result is a search engine of much of what’s written in NYC’s streets. It’s limited to what a Google Street View car can capture, so it excludes text in areas such as alleyways and parks, or any writing too small to be read by a moving vehicle.

The scale of the data is immense: over 8 million Google Street View images (from the past 18 years) and 138 million identified snippets of text.

Just over halfway down the article, there is a list of the top 1,000 words in the data. Most are expected words from traffic signs, like “stop.” But number twenty-five is “Fedders,” the logo of an air-conditioner brand popular from the 1950s to the 1990s. They’re all over the exteriors of the city’s buildings.

Best viewed on your computer, IMHO.


NYC’s Urban Textscape

Analyzing All of the Words Found on NYC Streets

pudding.cool

Josh Miller, CEO, and Hursh Agrawal, CTO, of The Browser Company:

Today, The Browser Company of New York is entering into an agreement to be acquired by Atlassian in an all-cash transaction. We will operate independently, with Dia as our focus. Our objective is to bring Dia to the masses.

Super interesting acquisition here. There is zero overlap as far as I can tell; Atlassian’s move is out of left field. Dia’s early users were college students, and The Browser Company more recently opened it up to former Arc users. Is this a bet by Atlassian—the company that makes tech-company-focused products like Jira and Confluence—on the future of work and collaboration? Is this their first move against Salesforce? 🤔


Your Tuesday in 2030

Or why The Browser Company is being acquired to bring Dia to the masses.

open.substack.com

DOC is a publication from Fabricio Teixeira and Caio Braga that I’ve linked to before. Their latest reflection is on interfaces.

A good user interface is a good conversation.

Interfaces thrive on clarity, responsiveness, and mutual understanding. In a productive dialogue, each party clearly articulates their intentions and receives timely, understandable responses. Just as a good conversationalist anticipates the next question or need, a good interface guides you smoothly through your task. At their core, interfaces translate intent into action. They’re a bridge between what’s in your head and what the product can do.

Reflection is the best word I’ve found to describe these pieces. They’re hype-free, urging us to take a step back, and—at least for me—a reminder about our why.

In the end, interfaces are also a space for self-expression.

The ideal of “no interface” promises ultimate efficiency and direct access—but what do we lose in that pursuit? Perhaps the interface is not just a barrier to be minimized, but a space for human expression. It’s a canvas; a place to imbue a product with personality, visual expression, and a unique form of art.

When we strip that away, or make everything look the same, we lose something important. We trade the unique and the delightful for the purely functional. We sacrifice a vital part of what makes technology human: the thoughtful, and sometimes imperfect, ways we present ourselves to the world.


DOC • Interface

On connection, multi-modality, and self-expression.

doc.cc

Hard to believe that the Domino’s Pizza tracker debuted in 2008. The moment was ripe for them—about a year after the debut of the iPhone. Mobile e-commerce was in its early days.

Alex Mayyasi for The Hustle:

…the tracker’s creation was spurred by the insight that online orders were more profitable – and made customers more satisfied – than phone or in-person orders. The company’s push to increase digital sales from 20% to 50% of its business led to new ways to order (via a tweet, for example) and then a new way for customers to track their order.

Mayyasi weaves together a tale of business transparency, UI, and content design, tracing—or tracking?—the tracker’s impact on business since then. “The pizza tracker is essentially a progress bar.” But progress bars do so much for the user experience, most of which is setting proper expectations.
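
Under the hood, that expectation-setting is just a mapping from backend states to user-facing steps. A minimal sketch, with stage names paraphrased from the tracker rather than taken from Domino’s actual system:

```python
# A progress bar as expectation-setting: each backend state maps to a
# labeled step and a fraction complete.
STAGES = [
    ("order_placed", "Order Placed"),
    ("prep", "Prep"),
    ("bake", "Bake"),
    ("quality_check", "Quality Check"),
    ("out_for_delivery", "Out for Delivery"),
]

def progress(state: str) -> tuple[str, float]:
    keys = [key for key, _ in STAGES]
    step = keys.index(state)  # raises ValueError on an unknown state
    return STAGES[step][1], (step + 1) / len(STAGES)

label, fraction = progress("bake")
print(f"{label}: {fraction:.0%}")  # Bake: 60%
```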


How the Domino’s pizza tracker conquered the business world

One cheesy progress update at a time.

thehustle.co

Here’s a fun project from Étienne Fortier-Dubois. It is both a timeline of tech innovations throughout history and a family tree. For example, the invention of the wheel led to chariots, and the ancestors of the bulletin board system were the home computer and the modem. From the about page:

The historical tech tree is a project by Étienne Fortier-Dubois to visualize the entire history of technologies, inventions, and (some) discoveries, from prehistory to today. Unlike other visualizations of the sort, the tree emphasizes the connections between technologies: prerequisites, improvements, inspirations, and so on.

These connections allow viewers to understand how technologies came about, at least to some degree, thus revealing the entire history in more detail than a simple timeline, and with more breadth than most historical narratives. The goal is not to predict future technology, except in the weak sense that knowing history can help form a better model of the world. Rather, the point of the tree is to create an easy way to explore the history of technology, discover unexpected patterns and connections, and generally make the complexity of modern tech feel less daunting.


Historical Tech Tree

Interactive visualization of technological history

historicaltechtree.com

I have always wanted to read 6,200 words about color! Sorry, that’s a lie. But I did skim it and really admired the very pretty illustrations. Dan Hollick is a saint for writing and illustrating this chapter in his living book Making Software, a reference manual for designers and programmers who make digital products. From his newsletter:

I started writing this chapter just trying to explain what a color space is. But it turns out, you can’t really do that without explaining a lot of other stuff at the same time.

Part of the issue is color is really complicated and full of confusing terms that need a maths degree to understand. Gamuts, color models, perceptual uniformity, gamma etc. I don’t have a maths degree but I do have something better: I’m really stubborn.

And here are the opening sentences of the chapter on color:

Color is an unreasonably complex topic. Just when you think you’ve got it figured out, it reveals a whole new layer of complexity that you didn’t know existed.

This is partly because it doesn’t really exist. Sure, there are different wavelengths of light that our eyes perceive as color, but that doesn’t mean that color is actually a property of that light - it’s a phenomenon of our perception.

Digital color is about trying to map this complex interplay of light and perception into a format that computers can understand and screens can display. And it’s a miracle that any of it works at all.
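
For a taste of the hidden math, here is the standard sRGB transfer function (a well-known formula, not excerpted from Hollick’s chapter): the "gamma" step that converts a stored channel value into linear light.

```python
# sRGB gamma decode: stored channel values are not linear light. The
# piecewise curve below is the sRGB standard's transfer function.
def srgb_to_linear(channel_8bit: int) -> float:
    c = channel_8bit / 255.0
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

print(srgb_to_linear(128))  # ~0.216: "50%" sRGB is only ~22% linear light
```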

I’m just waiting for him to put up a Stripe link so I can throw money at him.


Making Software: What is a color space?

In which we answer every question you've ever had about digital color, and some you haven't.

makingsoftware.com

Interesting piece from Vaughn Tan about a critical thinking framework that is disguised as a piece about building better AI UIs for critical thinking. Sorry, that sentence is kind of a tongue-twister. Tan calls out—correctly—that LLMs don’t think, or in his words, can’t make meaning:

Meaningmaking is making inherently subjective decisions about what’s valuable: what’s desirable or undesirable, what’s right or wrong. The machines behind the prompt box are remarkable tools, but they’re not meaningmaking entities.

Therefore, when users ask LLMs for their opinions (as in the therapy use case), the AIs won’t come back with actual thinking. IMHO, it’s semantics, but that’s another post.

Anyhow, Tan shares a pen-and-paper prototype he’s been testing, which breaks a major decision down into guided steps, or, put another way, a framework.

This user experience was designed to simulate a multi-stage process of structured elicitation of various aspects of strongly reasoned arguments. This design explicitly addresses both requirements for good tool use. The structured prompts helped students think critically about what they were actually trying to accomplish with their custom major proposals — the meaningmaking work of determining value, worth, and personal fit. Simultaneously, the framework made clear what kinds of thinking work the students needed to do themselves versus what kinds of information gathering and analysis could potentially be supported by tools like LLMs.

This guided or framework-driven approach was something I attempted with Griffin AI. Via a series of AI-guided prompts to the user—or a glorified form, honestly—my tool helped users build brand strategies. To be sure, a lot of the “thinking” was done by the model, but the idea that an AI can guide you to think critically about your business or your client’s business was there.
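
In code, the split Tan is after might look something like this; the prompts are my paraphrase of the idea, not his actual worksheet. The human answers the meaningmaking questions, and only the information-gathering step is a candidate for delegation to an LLM or search tool:

```python
# Staged elicitation: humans do the meaningmaking, tools do the lookup.
MEANINGMAKING_PROMPTS = [
    "What outcome would make this worth doing for you personally?",
    "What would you refuse to trade away, even if it made the plan easier?",
    "Who is affected if you're wrong, and how much does that matter?",
]

def elicit() -> dict[str, str]:
    # These answers must come from the human; no model call here.
    return {prompt: input(prompt + "\n> ") for prompt in MEANINGMAKING_PROMPTS}

def research_tasks(answers: dict[str, str]) -> list[str]:
    # The delegable part: turn the human's judgments into lookup tasks
    # an LLM or search tool could run.
    return [f"Gather evidence bearing on: {answer}" for answer in answers.values() if answer]

if __name__ == "__main__":
    print(research_tasks(elicit()))
```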


Designing AI tools that support critical thinking

Current AI interfaces lull us into thinking we’re talking to something that can make meaningful judgments about what’s valuable. We’re not — we’re using tools that are tremendously powerful but nonetheless can’t do “meaningmaking” work (the work of deciding what matters, what’s worth pursuing).

vaughntan.org