The role of snippets in assessing the potential relevance of search results

The message coming from enterprise search vendors at present is that AI (usually vaguely defined) can solve all search problems and present a very high percentage of relevant items on the first page of results. This ‘machine’ ranking of relevance is determined by software algorithms, but relevance lies in the eye and mind of the searcher, shaped by their intent, their context and their existing knowledge. Two people sitting alongside each other, with similar roles and responsibilities, may have significantly different views on which results are relevant. One size does not fit all!

It is also important to consider the potential benefits of presenting metadata values within the snippet. Metadata can be an important starting point for query reformulation, and in an enterprise context the inclusion of the author(s) could help the searcher assess the scope and quality of the content item and decide whether it would be of value to contact them for further information and advice.
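As a concrete illustration of metadata sitting alongside the extracted text, a result entry might be structured along the lines sketched below. The field names are assumptions made for the example, not any particular product's schema.

```python
from dataclasses import dataclass, field

@dataclass
class SearchResult:
    """One SERP entry; metadata fields sit alongside the snippet text.

    Field names are illustrative, not taken from a specific product.
    """
    title: str
    snippet: str                                      # extract from the document
    authors: list[str] = field(default_factory=list)  # who to contact for advice
    last_modified: str = ""                           # freshness cue
    file_format: str = ""                             # e.g. "PDF"
    page_count: int = 0                               # warns of very long items

result = SearchResult(
    title="Metadata strategy review",
    snippet="…the revised metadata strategy cut query reformulation time…",
    authors=["J. Smith"],
    last_modified="2023-04-12",
    file_format="PDF",
    page_count=148,
)
```

Showing the author(s) and the format/page count directly in the entry gives the searcher the contact and effort cues discussed above without having to open the document.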

However, the amount of metadata presented has implications for the length of the snippet (and for the extent to which the ‘ten results’ exceed a single SERP page), and for the speed at which the results page can be scanned.

A search user has to be able to make an informed judgement of the relevance of content in the SERP (Search Engine Results Page), one that enables them to select, consistently and with confidence, content that meets their personal information requirement, reinforces their trust in the application and maintains the highest possible level of overall search satisfaction.

Working through search results takes time; claims by search vendors about the ‘blazing speed’ of their search application should be matched against the quality of the snippets they offer (there are many snippet generation options) and against the time the user then takes to select a small number of highly relevant results. Judging relevance from a snippet is of course just the starting point.
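To make ‘snippet generation options’ a little more concrete, one widely used approach is the query-biased snippet, in which candidate passages are scored by their overlap with the query terms. The sketch below is a deliberately minimal illustration of that idea; the sentence splitting and scoring heuristics are assumptions for the example, not a reconstruction of any vendor's algorithm.

```python
import re

def query_biased_snippet(document: str, query: str, max_len: int = 200) -> str:
    """Return the sentence with the highest query-term overlap.

    Deliberately simple: production systems also weight term rarity,
    proximity and position, and add highlighting and ellipses.
    """
    query_terms = {t.lower() for t in re.findall(r"\w+", query)}
    sentences = re.split(r"(?<=[.!?])\s+", document)

    def score(sentence: str) -> int:
        words = {w.lower() for w in re.findall(r"\w+", sentence)}
        return len(words & query_terms)  # distinct query terms matched

    best = max(sentences, key=score)
    return best[:max_len] + ("…" if len(best) > max_len else "")

# The snippet is drawn from wherever the query terms cluster,
# not simply from the start of the document.
doc = ("Annual report 2023. The committee met twice. "
       "Our revised metadata strategy cut query reformulation time by a third. "
       "Appendices follow.")
print(query_biased_snippet(doc, "metadata query reformulation"))
```

Even this toy version shows why snippet quality matters: a poorly chosen passage pushes the cost of relevance judgement back onto the user, whatever the engine's response time.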

There is a substantial amount of research on snippets for web search queries, but in general these snippets link to a web page that can be scanned and assessed reasonably quickly. In enterprise search the content item could be several hundred pages long and in any one of many file formats.

A fairly common option from search vendors is to provide a thumbnail of a page that contains the query term, but the accessibility problems arising from having to view a small image, displayed only as the result of very precise mouse control, are ignored, as is the extent of the challenge for users with dyslexia.

Another factor in enterprise search is that even though the results are ‘relevant’, the user does not click on them because they already have the documents from other sources. They may be looking to add to their existing collection of information, or just to confirm to themselves that they do indeed have the most relevant information. This lack of click traffic makes click-throughs to the document an unreliable metric of search performance.

If you would like to learn more about managing the user interface, it is one of the topics covered in my one-day on-site enterprise search management training course. I have many examples of good and not-so-good UIs.

Martin White

Further reading

Some recent research papers on snippet management are listed below. They were open access when the links were checked prior to publication.

Search Interfaces for Biomedical Searching: How do Gaze, User Perception, Search Behaviour and Search Performance Relate?

Featured Snippets and their Influence on Users’ Credibility Judgements

Explaining Documents’ Relevance to Search Queries

A Study of Snippet Length and Informativeness: Behaviour, Performance and User Experience

Less is Less: When Are Snippets Insufficient for Human vs Machine Relevance Estimation?