
Definitely use AI at work if you can; you’d be guilty of professional negligence if you don’t. But you must not blindly take output from ChatGPT, Claude, or Gemini and use it as-is. You have to check it: verify that it’s free of hallucinations and applicable to the task at hand. Otherwise, you’ll generate “workslop.”

Kate Niederhoffer, Gabriella Rosen Kellerman, et al., in Harvard Business Review, report on a study by the Stanford Social Media Lab and BetterUp Labs. They write, “Employees are using AI tools to create low-effort, passable looking work that ends up creating more work for their coworkers.”

Here’s how this happens. As AI tools become more accessible, workers can quickly produce polished output: well-formatted slides; long, structured reports; seemingly articulate summaries of academic papers by non-experts; and usable code. While some employees use this ability to polish good work, others use it to create content that is actually unhelpful, incomplete, or missing crucial context about the project at hand. The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo it. In other words, it transfers the effort from creator to receiver.

Don’t be that person. Use AI to do better work, not to turn in mediocre work.

Workslop may feel effortless to create, but it exacts a toll on the organization. What a sender perceives as a loophole becomes a hole the recipient needs to dig out of. Leaders will do best to model thoughtful AI use with purpose and intention. Set clear guardrails for your teams around norms and acceptable use. Frame AI as a collaborative tool, not a shortcut. Embody a pilot mindset, with high agency and optimism, using AI deliberately to accelerate specific outcomes. And hold work done by bionic human-AI duos to the same standards of excellence as work done by humans alone.
