As AI enters the workplace, staff won't always trust the decisions the software makes. It is a catch-22: if the AI's decision matches their own choice, the AI adds little value, and if it reaches a different conclusion, users rarely accept the better result at face value. In other words, users want to know how a decision was made, not just proof that the result was superior.
This is an area called explainable AI. Within the TNX platform, the software produces both the tendering strategy and a human-readable explanation. The explanation states which strategy is being followed, the expected outcome, and whether the system is exploring a new strategy or exploiting the best one found so far. The business value of explaining smart tendering is risk reduction: left as a black box, users may sabotage the platform, overrule it, or advocate for its removal out of distrust or misunderstanding.
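The explore/exploit decision described above can be sketched as an epsilon-greedy choice that emits its explanation alongside the strategy it picks. This is a minimal illustration only: the function name, the strategy names, and the savings figures are assumptions for the example, not the TNX platform's actual implementation.

```python
import random

def choose_strategy(stats, epsilon=0.1, rng=random):
    """Pick a tendering strategy epsilon-greedily and build a
    human-readable explanation. Illustrative sketch, not TNX code."""
    # stats maps a strategy name to its expected saving (hypothetical %)
    best = max(stats, key=stats.get)
    if rng.random() < epsilon:
        # Explore: try a non-best strategy to keep learning the market
        choice = rng.choice([s for s in stats if s != best])
        mode = "exploring"
    else:
        # Exploit: use the strategy with the best expected outcome so far
        choice = best
        mode = "exploiting"
    explanation = (
        f"Strategy: {choice}. Expected saving: {stats[choice]:.1f}%. "
        f"Mode: {mode}."
    )
    return choice, explanation

# Hypothetical usage with made-up strategies and savings
stats = {"spot auction": 4.2, "contracted rates": 3.1}
choice, why = choose_strategy(stats)
```

The key design point is that the explanation is generated at decision time, from the same inputs that drove the decision, so what users read always matches what the system actually did.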
Explainable AI accelerates change management when teams first adopt TNX, and it helps coach staff to better understand the market dynamics they experience each day.