Ethan Mollick, a professor of entrepreneurship at the Wharton School, says that AI models have gotten so good that our relationship with them is changing. “We’re moving from partners to audience, from collaboration to conjuring,” he says.

He fed NotebookLM his book and 140 Substack posts and asked for a video overview. AI famously hallucinates, but Mollick found no factual errors in the six-minute video.

We’re shifting from being collaborators who shape the process to being supplicants who receive the output. It is a transition from working with a co-intelligence to working with a wizard. Magic gets done, but we don’t always know what to do with the results. This pattern — impressive output, opaque process — becomes even more pronounced with research tasks.

Mollick believes that the most wizard-like model today is GPT-5 Pro. He uploaded an academic paper that had taken him a year to write, had been peer-reviewed, and was then published in a major journal…

Nine minutes and forty seconds later, I had a very detailed critique. This wasn’t just editorial criticism: GPT-5 Pro apparently ran its own experiments using code to verify my results, including doing Monte Carlo analysis and re-interpreting the fixed effects in my statistical models. It had many suggestions as a result (though it fortunately concluded that “the headline claim [of my paper] survives scrutiny”), but one stood out. It found a small error, previously unnoticed. The error involved two different sets of numbers in two tables that were linked in ways I did not explicitly spell out in my paper. The AI found the minor error; no one ever had before.
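Mollick doesn’t show the code the model ran, but a Monte Carlo check of the kind he describes might look something like this minimal sketch. The data and the effect being tested here are invented for illustration; the idea is simply to ask how often a difference as large as the observed one would appear by chance under random reshuffling:

```python
import random

random.seed(0)

# Hypothetical data: does the "treatment" group really score higher
# than the "control" group, or could the gap arise by chance?
treatment = [2.9, 3.4, 3.1, 3.8, 3.5, 3.2]
control = [2.6, 2.8, 3.0, 2.7, 2.9, 2.5]

observed = sum(treatment) / len(treatment) - sum(control) / len(control)

# Monte Carlo / permutation test: pool the data, shuffle, and see how
# often a random split produces a difference at least as large.
pooled = treatment + control
n_iter = 10_000
extreme = 0
for _ in range(n_iter):
    random.shuffle(pooled)
    diff = sum(pooled[:6]) / 6 - sum(pooled[6:]) / 6
    if diff >= observed:
        extreme += 1

p_value = extreme / n_iter  # small value -> gap unlikely to be chance
print(f"observed difference: {observed:.2f}, Monte Carlo p-value: {p_value:.4f}")
```

A model verifying a paper’s headline claim would be doing something in this spirit, just against the paper’s actual data and statistical models rather than toy numbers.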

Later in his post, Mollick says that there’s a problem with this wizardry—it’s too opaque. So what can we do?

First, learn when to summon the wizard versus when to work with AI as a co-intelligence, or to not use AI at all. AI is far from perfect, and in areas where it still falls short, humans often succeed. For the growing number of tasks where AI is useful, co-intelligence, and the back-and-forth it requires, is often superior to a machine alone. Yet there are, increasingly, times when summoning a wizard and simply trusting what it conjures is the best approach.

Second, we need to become connoisseurs of output rather than process. We need to curate and select among the outputs the AI provides, but more than that, we need to work with AI enough to develop instincts for when it succeeds and when it fails.

And lastly, trust it. Trust the technology, he suggests. “The question isn’t ‘Is this completely correct?’ but ‘Is this useful enough for this purpose?’”

I think we’re in that transition period. AI is indeed remarkably good at some things and constantly getting better at the tasks it’s not yet good at. But we all know where this is headed.
