Search Solutions 2020 – Conference report

That Search Solutions 2020 took place at all is a tribute to superb teamwork, aided by Slack, from Ingo Frommholz, Udo Kruschwitz, Haiming Liu, Tony Russell-Rose, Steven Zimmerman and myself. By the end of the development phase almost 100 Slack messages had arrived in my inbox! Search Solutions is a very important source of revenue for IRSG, but clearly that was not going to be the case with a virtual event. The benefit was that we could include speakers from the Netherlands, Hungary, the USA and Canada without incurring any travel costs, making the event probably the most international ever staged.

The programme is on the IRSG website, with links to the presenters and abstracts of the papers. I am not going to try to summarise the presentations. Most were recorded, but a decision on releasing the recordings to non-participants has not yet been made.

The primary objective of Search Solutions is to bring together the information retrieval and search communities, and that does not just mean 'enterprise search'. Charlie Hull (OpenSource Connections) gave a live critique of the Sainsbury's e-commerce platform, Jeremy Pickens (OpenText) explored the challenges of e-discovery, and Paul Levay (NICE) outlined the importance of search in undertaking critical reviews. Paul Cleverley (Robert Gordon University) presented a fascinating analysis of the way in which an enterprise search application in an oil & gas company accommodated user requirements for information related to the Covid-19 pandemic. This research is due to be published in the near future.

On the information retrieval side, Elaine Toms (University of Sheffield) provided the ideal keynote with a paper entitled Conceptualising Search as a Set of Cognitive Prostheses. I'm not even going to try to summarise it but will just pick up a few elements of her slide deck.

To begin

  • The system only knows what the user enters into the search box
  • The system selectively knows about the user’s environment, albeit in a limited fashion, e.g., previous search queries, documents clicked on, scrolling, personal location,…
  • The system does not know which of these variables will influence how the user interprets relevance

If you are wondering what a cognitive prosthesis is, the definition provided by Elaine was 'an electronic computational device that extends the capability of human cognition or sense perception', and she proceeded to consider what these could be.

Elaine’s final slide suggested that possible approaches were

  • Common integrated interface for ‘knowledge work’ – think Microsoft Office ribbons on steroids
  • A ‘dashboard’ that includes Search + eDiscovery + Text Analytics + Data Analytics – an integrated tool – a swiss-army knife – for information access, retrieval and use
  • Need new thinking about what IR R&D needs to achieve and also new models and frameworks for how we think about the role of search in real-world tasks

Finally, she raised the question of where, ultimately, the human should stop and the machine start.

Elaine was followed by David Maxwell (TU Delft), building on his outstanding PhD thesis by talking about searching, stopping and user modelling, with particular reference to stopping heuristics and information scent/foraging as a model for searching. He certainly won the (non-existent) award for the best designed slide deck.
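To give a flavour of what a stopping heuristic looks like, here is a minimal sketch (not taken from David's talk; the function name, threshold and judgements are purely illustrative) of the classic 'frustration point' rule, in which a simulated user abandons a ranked list after a run of consecutive non-relevant results:

```python
# Illustrative sketch of a "frustration point" stopping rule:
# a simulated user works down a ranked list and stops after
# seeing `patience` non-relevant results in a row.

def frustration_point_stop(relevance, patience=3):
    """Return the depth at which the user stops examining results.

    relevance: list of 0/1 relevance judgements in rank order.
    patience: consecutive non-relevant results tolerated before stopping.
    """
    consecutive_misses = 0
    for depth, rel in enumerate(relevance, start=1):
        consecutive_misses = 0 if rel else consecutive_misses + 1
        if consecutive_misses >= patience:
            return depth
    return len(relevance)  # examined the whole list

print(frustration_point_stop([1, 0, 0, 0, 1, 1]))  # -> 4
```

Varying the patience parameter against real interaction logs is one simple way such heuristics can be fitted to observed user behaviour.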

Agnes Molnar (Search Explained, Hungary) brought us back to reality by talking about how important information quality and consistency are to a good search experience, and noting that exploring potential issues around these two content facets can take considerable time to resolve when working with a client.

The final trio of presentations were all deep dives into important issues in information retrieval research and development. Marianne Sweeny (Daedalus Information Systems) gave a very good introduction to neural IR (increasingly abbreviated to Neu-IR) and the role of information behaviours in optimizing neural IR systems. Along the way she mentioned a paper by François Chollet (Google) entitled On the Measure of Intelligence, running to over 70 pages. When I tweeted the link it gained over 27,000 impressions! Michael Bendersky (Google) followed up with a very clear account of the work that Google has been undertaking on learning to rank in TensorFlow. (I found the use of TF for TensorFlow confusing when for years TF has been Term Frequency!) Finally, Ricardo Baeza-Yates (Northeastern University at Silicon Valley) talked about the complex issues around fairness and bias in information retrieval.
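For readers new to learning to rank, the core idea can be sketched in a few lines. This is a generic illustration of a pairwise hinge loss, not code from the talk or from Google's TF-Ranking library; the function name and margin value are my own assumptions:

```python
# Sketch of the pairwise idea behind learning to rank: for every
# pair of documents where one is more relevant than the other,
# penalise the model if it scores them in the wrong order (or by
# less than a margin).

def pairwise_hinge_loss(scores, labels, margin=1.0):
    """Average hinge loss over all (more-relevant, less-relevant) pairs.

    scores: model scores for the documents in one query's result list.
    labels: graded relevance labels (higher = more relevant).
    """
    loss, pairs = 0.0, 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if labels[i] > labels[j]:  # document i should outrank j
                loss += max(0.0, margin - (scores[i] - scores[j]))
                pairs += 1
    return loss / pairs if pairs else 0.0

# A correctly ordered list with comfortable score gaps incurs no loss.
print(pairwise_hinge_loss([3.0, 1.0, 0.0], [2, 1, 0]))  # -> 0.0
```

In practice a library such as TF-Ranking optimises losses of this family with gradient descent over a neural scoring function rather than computing them pair by pair in Python.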

Each speaker had a 20-minute slot followed by 10 minutes for questions, and the programme ran no more than a few minutes late at any point during the day. Steven Zimmerman played a crucial role in this outcome, manning the technical support help desk.

The number of participants was very similar to those in previous years, which was gratifying. Overall (and of course I am biased) it was a very successful event.

About Martin White

Martin is an information scientist and the author of Making Search Work and Enterprise Search. He has been involved with optimising search applications since the mid-1970s and has worked on search projects in both Europe and North America. Since 2002 he has been a Visiting Professor at the Information School, University of Sheffield and is currently working on developing new approaches to search evaluation.
