Putting aside the sheer scale of research and development in AI technology, there is probably no more important topic at present than Explainable AI, usually given the acronym XAI or xAI. The topic has crept into existence over the last five years, but the launch of the IBM AI FactSheets 360 web site in mid-2020 seems to have catalysed interest, research and solution development. Since that time I have added perhaps 50 or more papers on xAI to my collection, including one that raises the critical issue of ‘explainable to whom?’. If this is a topic you have not yet really looked at, then a recent paper that is a combined effort of authors from IBM and Microsoft should be essential reading (free download). In their introduction the authors note:
“With the increasing adoption of AI technologies, especially popular inscrutable “opaque-box” machine learning (ML) models such as neural networks models, understanding becomes increasingly difficult. Meanwhile, the need for stakeholders to understand AI is heightened by the uncertain nature of ML systems and the hazardous consequences they can possibly cause as AI is now frequently deployed in high-stakes domains such as healthcare, finance, transportation, and even criminal justice. Some are concerned that this challenge of understanding will become the bottleneck for people to trust and adopt AI technologies. Others have warned that a lack of human scrutiny will inevitably lead to failures in usability, reliability, safety, fairness, and other moral crises of AI”.
The objective of the paper is set out in its title, ‘Human-Centered Explainable AI (XAI): From Algorithms to User Experiences’, and this comes back to the issue of ‘explainable to whom’ I highlighted earlier. The paper is only 13 pages long but reviews over 100 papers, another measure of the level of concern (rather than just interest) in evolving a robust framework for a user-centred approach to getting inside the black box. Quite a lot of progress has been made over the last year, for example the Microsoft AI Trust Score, the XAI Handbook and the TrustyAI Explainability Toolkit.

Choosing an AI-supported application without assessing the extent to which you are able to audit the AI routines is rather like buying a car without asking whether it is petrol, diesel, hybrid, electric or hydrogen. For the first 300 miles it probably makes little difference. Longer term…? I hope that makes the point.
The paper that I have used as the basis for this post is a very good place to start, but it is only a starting point. What is equally important is to establish the principles, policies and priorities for the acceptance of AI applications by your organisation. Without this governance framework in place you might just as well print out the paper and use it for origami practice. As well as links into IT governance, there are implications for risk management, data privacy, workplace wellness and workplace training, to name but four. There are also some specific impacts on the management of enterprise search, which I have discussed in two CMSWire columns in January and March. Maybe we are approaching the time of a CxAI Officer; the implications are such that the Board itself needs expert guidance.
Martin White