Personalised search – an ethical perspective

by Martin White | Mar 20, 2018 | Digital workplace, Information management, Search

I have commented recently on both personalised and exploratory search, and on some of the topics that were covered at an ISKO UK seminar on the ethics of information. This time around I want to bring these topics together. We are now well used to the fact that Google, Bing and other search sites make use of a range of associated data on our searching habits, location and much more to target the results we receive from a search. Amazon was probably the first to do this at scale, though having selected a book I find I am more interested in the people who purchased books seemingly quite different from my own selection. In the case of these public search sites there are two important issues to consider. The first is that if you don’t like the Google search bias you can run the search in Bing or DuckDuckGo or any number of other sites. The second is that you are probably not going to make a life-changing decision on the basis of a single search.

Inside the enterprise things are totally different. The first difference is that there is (in general) only one search application across core business-critical repositories. The second is that if you make a decision based on the personalised search experience promised by the cognitive search community, and that decision turns out to be wrong, it could be career-changing in all the wrong ways. The question I want to raise is whether limiting the scope of a search by incorporating factors which are invisible to the employee (and that they cannot verify or change) is ethical. It presents a ‘truth’ without there being any way of verifying that truth. In the public space we have options, not just from using different search sites but also from interrogating individual websites. Take weather information. As I write this post the BBC Weather website tells me there is heavy snow in Philadelphia. On the freeway cameras there is no snow in sight and flights are taking off on time from PHL.

Back inside my company office in London I might be looking for sales by customer in Saudi Arabia. The search application provides a personalised table (results are so “old search”!) but it is not clear whether the numbers are in Saudi riyals, pounds sterling or US dollars, because I work for a multinational organisation and I don’t know where the sales are consolidated. What I also don’t know is whether I am seeing the full picture, because my security clearance or office location might mean that sales to Saudi government agencies are not included. Security trimming I can probably sort out, because I am aware that it might be happening and I can call a friend who has a higher security clearance to check. That’s what friends are for – workarounds for security trimming! The core question is whether I trust the information, and the greater the degree of ‘personalisation’ that I experience (and it doesn’t take long to work out the main rules of the game) the less I trust the results. This is especially the case if just the information is presented (a table of customer revenues) without the surrounding report, which will almost certainly give me the context I need to make an informed decision.
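
To make the mechanics concrete, here is a minimal sketch, in Python with entirely hypothetical names and rules, of how an enterprise search layer could silently apply both security trimming and a personalisation filter. It is not any vendor’s actual implementation; the point is simply that the employee who receives the trimmed list has no way of knowing what was removed, or why.

```python
# A minimal, hypothetical sketch of invisible result filtering in an
# enterprise search pipeline. All names (Result, UserProfile, the rules)
# are illustrative assumptions, not any vendor's actual API.
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    clearance_required: int  # minimum clearance needed to see this item
    region: str              # region the item relates to

@dataclass
class UserProfile:
    clearance: int
    office_region: str

def personalised_search(results: list[Result], user: UserProfile) -> list[Result]:
    """Return only the results this user is allowed and 'expected' to see.

    Crucially, the caller never learns how many results were removed,
    or why -- which is exactly the transparency problem discussed above.
    """
    visible = []
    for r in results:
        if r.clearance_required > user.clearance:
            continue  # security trimming: silently dropped
        if r.region != user.office_region:
            continue  # personalisation rule: silently dropped
        visible.append(r)
    return visible

# An analyst with clearance 2 would never see the government agency row
# below, and nothing in the output says so.
catalogue = [
    Result("Saudi retail sales FY17", clearance_required=1, region="EMEA"),
    Result("Saudi government agency sales FY17", clearance_required=3, region="EMEA"),
]
analyst = UserProfile(clearance=2, office_region="EMEA")
print(personalised_search(catalogue, analyst))  # only the retail row survives
```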

Many search vendors are selling the nirvana of making it much easier to find highly relevant information through a personalised search that removes extraneous information. Are employees aware of the extent to which search has been personalised? Who is making the judgements on the filters, and is there a way in which an individual can verify their personal filter profile and, if needed, challenge it within the compliance structure of the organisation? What if the technology does not work correctly – will the vendor accept a case for damages if the organisation makes an inadvertently uninformed decision? Is there an entry on the corporate risk register about the potential outcomes of personalisation that is inappropriate to the occasion and the decision? At what level in an organisation does personalisation no longer apply, and who made that decision? Remember that there will always be people in the organisation who set up these systems and know the rules. What would your organisation do if one of them blew the whistle?
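
One partial answer to the verification question is a design in which the search service returns, alongside the results, a record of every filter it applied. The sketch below (again Python, again entirely hypothetical, reusing the Result and UserProfile types from the earlier sketch) shows the shape of such a response: an employee, or a compliance officer, could at least see that trimming took place, even while the trimmed content itself remains hidden.

```python
# A hypothetical sketch of a more transparent design: the response
# discloses which rules fired, so the filter profile can be audited.
from dataclasses import dataclass, field

@dataclass
class TransparentResponse:
    results: list                                   # what the user may see
    filters_applied: list = field(default_factory=list)  # record of rules that fired

def transparent_search(results, user) -> TransparentResponse:
    """Like personalised_search above, but the response reports its own trimming."""
    response = TransparentResponse(results=[])
    for r in results:
        if r.clearance_required > user.clearance:
            response.filters_applied.append(
                f"security trimming: item withheld (clearance {r.clearance_required} required)"
            )
            continue
        if r.region != user.office_region:
            response.filters_applied.append(
                "personalisation: out-of-region item withheld"
            )
            continue
        response.results.append(r)
    return response
```

Even this modest disclosure changes the ethical position: the employee now knows the table is incomplete, and the filter log gives the compliance function something concrete to verify and challenge.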

Before implementing any ‘cognitive’ personalised search application in which the filters are invisible, we should be asking employees whether they are comfortable with this approach. The advent of AI and machine learning makes this question increasingly important. We need a strong ethical justification if we are going to put each employee into an information bubble they did not design.

Martin White