In this era of AI, we’ve been taught that LLMs are probabilistic, not deterministic, and that they will sometimes hallucinate. There’s a saying in AI circles that humans are right about 80% of the time, and so are AIs. That tradeoff is tolerable in plenty of domains, but not in the ones where anything less than 100% accuracy is unacceptable. Accountants need to be 100% accurate, lest they lose track of money for their clients or businesses.
And that’s the problem Intuit had to solve to roll out their AI agent. Sean Michael Kerner, writing in VentureBeat:
Even when its accounting agent improved transaction categorization accuracy by 20 percentage points on average, they still received complaints about errors.
“The use cases that we’re trying to solve for customers include tax and finance; if you make a mistake in this world, you lose trust with customers in buckets and we only get it back in spoonfuls,” Joe Preston, Intuit’s VP of product and design, told VentureBeat.
So Intuit built an agent that queries data from a multitude of sources and returns those exact results. But do users trust those results? That comes down to a design decision about transparency:
Intuit has made explainability a core user experience across its AI agents. This goes beyond simply providing correct answers: It means showing users the reasoning behind automated decisions.
When Intuit’s accounting agent categorizes a transaction, it doesn’t just display the result; it shows the reasoning. This isn’t marketing copy about explainable AI, it’s actual UI displaying data points and logic.
“It’s about closing that trust loop and making sure customers understand the why,” Alastair Simpson, Intuit’s VP of design, told VentureBeat.
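To make the pattern concrete, here’s a minimal sketch of what “showing the reasoning” can look like in practice. This is my illustration, not Intuit’s implementation — every type and field name below is hypothetical — but the idea is the one described above: the categorization result carries its supporting data points and a plain-language explanation, so the UI can render the why alongside the what.

```typescript
// Hypothetical sketch only — none of these names come from Intuit.
// The point: the explanation travels with the answer, as part of the data contract.

interface EvidencePoint {
  source: string; // e.g. "bank feed", "vendor history", "prior categorizations"
  detail: string; // the specific data point that supported the decision
}

interface CategorizationResult {
  transactionId: string;
  category: string;          // the answer shown to the user
  confidence: number;        // 0..1, surfaced so low-confidence calls invite review
  evidence: EvidencePoint[]; // the data points behind the decision
  reasoning: string;         // a plain-language summary of the logic
}

// Render the result with its explanation attached, not the category alone.
function renderExplanation(result: CategorizationResult): string {
  const evidenceLines = result.evidence
    .map((e) => `  - ${e.source}: ${e.detail}`)
    .join("\n");
  return [
    `Categorized as "${result.category}" (confidence ${(result.confidence * 100).toFixed(0)}%)`,
    `Why: ${result.reasoning}`,
    `Based on:`,
    evidenceLines,
  ].join("\n");
}
```

The design choice worth noting is that the explanation isn’t bolted onto the UI after the fact; it’s a first-class part of what the agent returns.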

Intuit learned to build AI agents for finance the hard way: Trust lost in buckets, earned back in spoonfuls
The QuickBooks maker's approach to embedding AI agents reveals a critical lesson for enterprise AI adoption: in high-stakes domains like finance and tax, one mistake can erase months of user confidence.