The success of AI interfaces hinges not just on the quality of AI suggestions, but on how users interact with them. We discovered that creating the right level of user engagement was crucial for building trust and ensuring accuracy.
Our key challenge was designing a journey with just the right amount of friction. Too little friction made the experience slippery, causing users to make mistakes by rushing through AI suggestions, while too much friction risked turning AI assistance into a chore.
Omny enables AI-driven content improvement in two steps:
Our initial design prioritized efficiency:
Despite our AI's 90% accuracy rate, this approach revealed unexpected problems:
This happened because we were pre-populating the AI suggestions as the input for that step in the wizard, so users could simply click Next without ever reading them.
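As a rough illustration, the original step behaved something like the sketch below: the AI output was copied straight into the step's fields and the Next button was always enabled. The types and function names here are hypothetical, not Omny's actual code.

```typescript
// Hypothetical sketch (not Omny's actual code) of the original wizard step:
// AI suggestions are pre-populated as the step's input, and "Next" is always
// enabled, so the step can be completed without reading anything.

interface Suggestion {
  id: string;
  text: string; // e.g. a benefit or use case proposed by the AI
}

interface WizardStepState {
  fields: Suggestion[]; // values the step will submit on "Next"
  canProceed: boolean;  // whether the "Next" button is enabled
}

function initStepWithAISuggestions(aiSuggestions: Suggestion[]): WizardStepState {
  return {
    fields: [...aiSuggestions], // AI output copied straight into the inputs
    canProceed: true,           // nothing forces the user to review it
  };
}
```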
User interviews revealed an ironic twist: by making the process too smooth, we'd actually undermined users' confidence in the system.
This revealed a fundamental tension between convenience and engagement. Maximum efficiency (instant AI suggestions) conflicted with maximum accuracy (careful user review).
The original interface had another unintended consequence: users held Omny accountable for suggestions they hadn't properly reviewed.
The solution lay in reframing the AI-human relationship. Instead of presenting AI outputs as decisions, we redesigned the interface to encourage collaboration. This would hopefully transfer the onus of selecting the right benefits and use cases from the AI to the user, which is where it belongs anyway.
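A minimal sketch of the redesigned interaction, again with hypothetical names rather than real Omny code: suggestions are rendered as unselected options, and the wizard only lets the user proceed once they have explicitly picked at least one.

```typescript
// Hypothetical sketch (not Omny's actual code) of the collaborative step:
// AI suggestions are shown as options, and "Next" stays disabled until the
// user has explicitly selected at least one of them.

interface Suggestion {
  id: string;
  text: string;
}

interface CollaborativeStepState {
  options: Suggestion[];    // AI proposals, presented but not yet accepted
  selectedIds: Set<string>; // what the user has deliberately chosen
}

function initCollaborativeStep(aiSuggestions: Suggestion[]): CollaborativeStepState {
  return { options: aiSuggestions, selectedIds: new Set<string>() };
}

// Toggling a suggestion is the deliberate act that moves ownership of the
// selection from the AI to the user.
function toggleSuggestion(
  state: CollaborativeStepState,
  id: string
): CollaborativeStepState {
  const selectedIds = new Set(state.selectedIds);
  if (selectedIds.has(id)) {
    selectedIds.delete(id);
  } else {
    selectedIds.add(id);
  }
  return { ...state, selectedIds };
}

// The small amount of friction the original flow lacked: the user cannot
// advance until they have reviewed and picked something.
function canProceed(state: CollaborativeStepState): boolean {
  return state.selectedIds.size > 0;
}
```

The point of this pattern is not extra clicks for their own sake; the explicit selection step is what shifts responsibility for the final choice from the AI to the user.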
The new design made users more aware of how they were using AI suggestions. They became more forgiving of occasional AI inaccuracies because they were now active participants in the selection process, and the result was stronger trust in the AI's capabilities and better AI-human collaboration overall.