And finally – from the Editor

This issue of Informer includes my report on the Alan Turing Institute conference on LLMs (held in London on 23 February), which gives some indication of how quickly LLMs and related applications (ChatGPT, Microsoft Copilot and oh so many more!) have developed over the last few months.

The opportunities for research into the performance and possibilities of LLMs (a term I am using very generically) are colossal, and such research is essential if we are to get the best from this technology and avoid the worst it has to offer. It has struck me that the publication of this research is not keeping up with the speed of development. Even in journals that pride themselves on early publication, the papers offer a historical perspective which is interesting but of questionable long-term value. There is also the challenge of finding peer reviewers who have an appropriate level of expertise in the topics.

As I was completing work on this issue in mid-April, I spotted two papers on arXiv reviewing the published literature on what is now being termed AI-Generated Content (AIGC). At least it is not a TLA! AI-Generated Content (AIGC): A Survey has 116 citations, and One Small Step for Generative AI, One Giant Leap for AGI: A Complete Survey on ChatGPT in AIGC Era offers 226!

(In passing, I might comment that ‘clever’ titles are not helpful, and the authors do not explain how they know their survey is ‘complete’.)

Looking through the lists of citations, what is immediately obvious is the number of references to pre-print papers, notably on arXiv. This of course raises questions about the validity of pre-prints that lack the sanity check (hopefully!) of peer review. I recently came across a pre-print on the technology of IR and enterprise search that was so full of inaccuracies I was almost moved to tears. That paper was, of course, in a topic area I have been following since the late 1970s – so how much trust should I place in other arXiv papers on these current topics?

I do not want to be seen to dismiss all pre-prints and pre-print servers. Over the last few months there have been many exceptional papers, but my judgement has been based (and arguably biased) on the institutions and biographical profiles of the authors. Even so, I have added 70 papers to my collection since 1 January, with my focus very much on research that has a potential impact on enterprise search.

I have no immediate solutions, and that makes tracking both the research outcomes and the commercial offerings from an increasingly large number of profit-chasing vendors very challenging. My reference to ‘profit’ is a reminder that in the end someone has to pay the bills for the computational power needed to make all this technology work. What is very noticeable in the vast amounts of vendor PR I am seeing is that there is no indication of the pricing models for the commercial versions of any of their current offerings – and that includes the cost of water. Perhaps all LLM variants should come with an environmental impact statement!

PS If you are an academic wondering what to do next, read Choose Your Weapon: Survival Strategies for Depressed AI Academics by Julian Togelius and Georgios Yannakakis. Simply splendid – best read with a drink in your hand!

PPS Are you aware of arXiv Vanity, which renders academic papers from arXiv as responsive web pages so you don’t have to squint at a PDF? Good for editors of a certain age!
About Martin White
Martin is an information scientist and the author of Making Search Work and Enterprise Search. He has been involved with optimising search applications since the mid-1970s and has worked on search projects in both Europe and North America. Since 2002 he has been a Visiting Professor at the Information School, University of Sheffield and is currently working on developing new approaches to search evaluation.
