Over the last few years there has been a significant increase in the attention paid to explainable AI, partly as an outcome of new regulations proposed by the European Union. One of the earliest papers (2017) on the topic was authored by Finale Doshi-Velez and Been Kim of Harvard University. (A TEDx talk by Finale Doshi-Velez on AI explainability is well worth watching.) Since then there has been an avalanche of reports and research papers, including an excellent perspective from Julie Gerlings, Millie Søndergaard Jensen and Arisa Shollo entitled Explainable AI, but explainable to whom?, which raises a fundamental question that has to be addressed. Applications such as Alibi Explain are beginning to emerge as audit tools for explainability.
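To make the idea of an explainability audit concrete, here is a minimal sketch of one widely used model-agnostic technique, permutation feature importance: scramble one input feature at a time and measure how much the model's output changes. The toy "relevance scorer" and data below are hypothetical illustrations, not Alibi Explain's actual API.

```python
# Generic illustration of permutation feature importance.
# The model and data are hypothetical; this only sketches the kind of
# evidence an explainability audit tool might produce.
import random

def model(features):
    # Hypothetical relevance scorer: feature 0 dominates, feature 2 is ignored.
    return 3.0 * features[0] + 1.0 * features[1] + 0.0 * features[2]

def permutation_importance(model, rows, n_repeats=10, seed=0):
    """Mean absolute change in model output when one feature is scrambled."""
    rng = random.Random(seed)
    baseline = [model(r) for r in rows]
    importances = []
    for j in range(len(rows[0])):
        total = 0.0
        for _ in range(n_repeats):
            column = [r[j] for r in rows]
            rng.shuffle(column)
            perturbed = [r[:j] + [v] + r[j + 1:] for r, v in zip(rows, column)]
            deltas = [abs(model(p) - b) for p, b in zip(perturbed, baseline)]
            total += sum(deltas) / len(rows)
        importances.append(total / n_repeats)
    return importances

data_rng = random.Random(42)
rows = [[data_rng.random() for _ in range(3)] for _ in range(20)]
importances = permutation_importance(model, rows)
# Feature 0 should score highest; feature 2 should score (near) zero,
# exposing which inputs the model actually relies on.
```

An audit report built on a technique like this lets a user see which inputs drive a result, which is exactly the kind of transparency the papers above argue for.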
However, as observed by Sebastian Palacio and his colleagues in their proposal for an xAI Handbook:
“A silent, recurrent and acknowledged issue in this area is the lack of consensus regarding its terminology. In particular, each new contribution seems to rely on its own (and often intuitive) version of terms like “explanation” and “interpretation”. Such disarray encumbers the consolidation of advances in the field towards the fulfillment of scientific and regulatory demands e.g., when comparing methods or establishing their compliance with respect to biases and fairness constraints”
An underlying concern with AI systems is ensuring that users can trust the systems to behave in their best interests. Probably nowhere is this need for trust more important than in enterprise search applications, where a failure to find information could damage the reputation of the enterprise and of the individual employee. In most other search applications there is a work-around: if information cannot be found in PubMed, we can turn to Scopus. In enterprise search there is no work-around.
At present hardly a week goes by without a current or new entrant to the enterprise search market offering a plethora of AI/ML-based applications that promise perfect solutions to all known and unknown search challenges. Dig further into the vendor web sites, however, and two issues are conspicuous by their absence. First, there is no reference to how the ML models will be tested and tuned for the circumstances of an individual enterprise, where content quality and consistency are highly variable.
Second, there is no reference to any policy by the vendor about the extent of their commitment to explaining how their ML applications work, how they can be audited and managed, and how they can be modified in the light of experience.
Of course, enterprise search is not the only AI-supported application in the enterprise, and companies are establishing guidelines on xAI for their procurement due diligence, cognizant of the direction of travel of the EU in this regard. It is important to appreciate that the EU regulations would have extraterritorial reach, meaning that any AI system providing output within the European Union would be subject to them, regardless of where the provider or user is located.
With any IT technology, and especially AI/ML search, there are both opportunities and risks. Search vendors seem fixated on presenting only the benefits. They need to be ready and open to discussing the risks that potential customers, driven by corporate xAI protocols, will wish to raise very early in a procurement project. Indeed, given a choice, those customers may prefer to limit their initial discussions to vendors with a visible and sensible commitment to xAI.