Interesting piece from Vaughn Tan about a critical thinking framework disguised as a piece about building better AI UIs for critical thinking. Sorry, that sentence is kind of a tongue-twister. Tan calls out—correctly—that LLMs don’t think, or, in his words, can’t make meaning:
Meaningmaking is making inherently subjective decisions about what’s valuable: what’s desirable or undesirable, what’s right or wrong. The machines behind the prompt box are remarkable tools, but they’re not meaningmaking entities.
Therefore, when users ask LLMs for their opinions, as in the therapy use case, the AIs won’t come back with actual thinking. IMHO, it’s semantics, but that’s another post.
Anyhow, Tan shares a pen-and-paper prototype he’s been testing, which breaks a major decision down into guided steps, or, put another way, a framework.
This user experience was designed to simulate a multi-stage process of structured elicitation of various aspects of strongly reasoned arguments. This design explicitly addresses both requirements for good tool use. The structured prompts helped students think critically about what they were actually trying to accomplish with their custom major proposals — the meaningmaking work of determining value, worth, and personal fit. Simultaneously, the framework made clear what kinds of thinking work the students needed to do themselves versus what kinds of information gathering and analysis could potentially be supported by tools like LLMs.
This guided, framework-driven approach was something I attempted with Griffin AI. Via a series of AI-guided prompts to the user—or, honestly, a glorified form—my tool helped users build brand strategies. To be sure, a lot of the “thinking” was done by the model, but the idea that an AI can guide you to think critically about your business or your client’s business was there.
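The core of that kind of guided approach can be sketched in a few lines. This is a minimal, hypothetical Python sketch, not Griffin AI’s actual design: the `GuidedFramework` class, the `llm_assist` flag, and the step wording are my illustrations. The point is that the tool owns the critical-thinking structure, the user answers the meaningmaking questions, and an LLM is only allowed to support the explicitly flagged information-gathering steps.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    prompt: str       # question posed to the user
    llm_assist: bool  # may an LLM support this step, or must the user do the thinking?

@dataclass
class GuidedFramework:
    steps: list[Step]
    answers: dict[str, str] = field(default_factory=dict)

    def run(self, answer_fn):
        """Walk through each step in order; answer_fn stands in for user input."""
        for step in self.steps:
            self.answers[step.prompt] = answer_fn(step)
        return self.answers

# A hypothetical brand-strategy flow: the meaningmaking steps stay with the
# user (llm_assist=False); research steps are fair game for LLM support.
brand_strategy = GuidedFramework(steps=[
    Step("Who is this brand for, and why should they care?", llm_assist=False),
    Step("What do competitors in this space claim?", llm_assist=True),
    Step("Which of those claims do you reject, and why?", llm_assist=False),
])

answers = brand_strategy.run(lambda s: f"[user answer to: {s.prompt}]")
```

In a real tool, `answer_fn` would be the form UI, and the `llm_assist` flag would gate which steps get a “draft this for me” button, which is roughly the division of labor Tan’s framework makes explicit.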

Designing AI tools that support critical thinking
Current AI interfaces lull us into thinking we’re talking to something that can make meaningful judgments about what’s valuable. We’re not — we’re using tools that are tremendously powerful but nonetheless can’t do “meaningmaking” work (the work of deciding what matters, what’s worth pursuing).