Matt Ström-Awn, writing on his personal site, picks up a three-year-old line from Ted Chiang and turns it inside out:
> Three years ago, Ted Chiang described ChatGPT as a blurry JPEG of the web. LLMs are a lossy compression of their training data, which is itself a lossy sample of all the data available to it. But the artifacts we see in AI slop aren’t in the compression. They’re in the decompression.
>
> Every AI-generated output is an extrapolation from that blurry source, vectored toward your prompt, filling in plausible detail where the compression threw information away. The output gets inflated into blog posts and LinkedIn thoughtspam, software platforms, omnichannel advertising campaigns, and movie cameos from dead actors. Chiang compared the gaps and confabulations to compression artifacts.
>
> I think they’re expansion artifacts.
Chiang had the compression metaphor; what we needed was a word for what these tools do on the way back out, and Ström-Awn gave us one.
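The distinction is easy to see in miniature. Here's a toy sketch (plain NumPy, my own construction, nothing from the article itself): throw away three quarters of a 1-D signal, then reconstruct it by interpolation. The reconstruction is exact at the samples that survived compression; all the error lives in the points the decompressor had to invent.

```python
import numpy as np

# "Compress" a signal by keeping every 4th sample (lossy),
# then "expand" it back by linear interpolation. The interpolated
# values are plausible but invented: artifacts of the expansion.
rng = np.random.default_rng(0)
original = np.cumsum(rng.normal(size=64))   # a wiggly 1-D signal

kept_idx = np.arange(0, 64, 4)              # keep 16 of 64 points
compressed = original[kept_idx]

# Decompression has to guess the 48 missing points.
expanded = np.interp(np.arange(64), kept_idx, compressed)

# At the kept indices the reconstruction is exact...
assert np.allclose(expanded[kept_idx], original[kept_idx])

# ...everywhere else the detail is fabricated: smooth and confident
# where the original was noisy.
error = np.abs(expanded - original)
print(f"max error at kept samples:    {error[kept_idx].max():.3f}")
print(f"max error at guessed samples: {np.delete(error, kept_idx).max():.3f}")
```

The guessed points aren't random noise; they're the interpolator's house style (straight lines, here) imposed wherever the data ran out. That's the shape of the argument about LLM output, too.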
Ström-Awn lists what expansion artifacts look like across modalities:
- LLMs produce text stuffed with hedging verbs, fuzzy adjectives, and telltale nouns (delve, intricate, multifaceted, tapestry). Their paragraphs are structured as miniature essays with setup, payoff, and a signposted takeaway (This matters because…).
- AI-generated code over-comments the obvious and creates error handlers for operations that can’t logically fail.
- Image generators have had their own tells: six-fingered hands, mismatched jewelry, text that looks like text but only if you cross your eyes.
- Video models struggle with continuity. Limbs appear and disappear, objects clip through each other, and physics sometimes just switches off.
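The code tell is the easiest one to reproduce by hand. Here's a hypothetical snippet in the style the post describes (the function is mine, not from the article): every line narrated, plus an exception handler wrapped around integer addition, an operation that can't logically fail.

```python
def add_numbers(a: int, b: int) -> int:
    """Adds two numbers together and returns the sum."""
    # Initialize the result variable to store the result
    result = 0
    try:
        # Perform the addition operation on the two numbers
        result = a + b
    except Exception as error:
        # Handle any unexpected errors that may occur during addition
        print(f"An error occurred while adding the numbers: {error}")
    # Return the final computed result to the caller
    return result

print(add_numbers(2, 3))  # prints 5
```

The `except` branch is dead code, the comments restate each line, and the docstring paraphrases the function name. None of it is wrong, exactly; it's padding where the model's confidence ran out.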
Each of these artifacts is the training distribution leaking through where the model’s confidence runs thin.
Ström-Awn writes about the designer-specific tells too:
> Power users of AI website generators (AI-pilled designers) already know how to recognize the tool marks, if only to try to prompt them away: purple gradients are an especially common tell. But as more and more non-designers use tools like Claude Design to prompt their way to fully-functional software products, I expect to see a preference for the aesthetic convergence endemic to the current crop of AI models.