Buzz Usborne on what happens when AI takes on more responsibility in a product:
AI doesn’t simply make products smarter — it redistributes thinking and decision-making between humans and machines. When AI absorbs cognition, it also inherits responsibility. And when it inherits responsibility, the cost of its mistakes rises.
Usborne frames this through three forces that determine whether AI features survive or fail: trust, value perception, and cognitive effort. They amplify each other in a loop: low trust increases perceived effort, high effort reduces perceived value, and low value circles back to undermine trust.
His answer is to earn autonomy through interaction, not demand trust upfront:
Trust does not always need to precede adoption; it can emerge through usage. Salesforce’s findings show that “Human validation of outputs is the biggest driver in trusting the outcome, over consistently accurate outputs.” In other words, users trust systems they can interrogate, shape, and verify. And instead of designing AI products that are perfect, we can earn trust by designing experiences that are controllable.
Controllable over perfect.