The Editor’s in-tray

A small miscellany of search-related items that have recently arrived in my in-tray and that may be of interest.

Microsoft Search Hero Mastermind Group

This is a new and very enterprising training course for Microsoft search managers, mixing tuition and mentoring spread over a three-month period. The course has been developed by Agnes Molnar of Search Explained.

What Is ChatGPT Doing … and Why Does It Work?

Stephen Wolfram has written a very detailed account of the mathematics and algorithms behind LLMs, so long that it is now available as a paperback book. It is very well written, and although some sections went slightly over my head, it provides a level of detail that is missing from the many (many!) academic papers published over the last few months, which assume PhD-level knowledge. Details of the book are on the web site.

IBM videos on LLMs and Machine Learning

I have been both educated and fascinated by some recent short (around ten-minute) videos on the science behind language models. The fascination is that the IBM presenters write backwards on a transparent sheet. You have to watch the videos to see what I mean, but they are a model of clarity and communication.

Generative Models Explained

Can You Trust Large Language Models

AI versus Machine Learning

Foundation Models and Fair Use

This arXiv paper from a team at the Center for Research on Foundation Models is one you should at least be aware of, even if you don't currently have time to work through its 61 pages and great many citations. The paper addresses only fair use under US copyright law, which differs in detail from other jurisdictions. Nevertheless, it raises some very important issues arising from the content-related aspects of LLMs rather than the technology itself. I can do no better than to reproduce the paper's conclusions.

“We reviewed U.S. fair use standards and analyzed the risks of foundation models when evaluated against those standards in a number of concrete scenarios with real model artifacts. Additionally, we also discussed mitigation strategies and their respective strengths and limitations. As the law is murky and evolving, our goal is to delineate the legal landscape and present an exciting research agenda that will improve model quality overall, further our understanding of foundation models, and help make models more in line with fair use doctrine. By pursuing mitigation strategies that can respect the ethics and legal standards of intellectual property law, machine learning researchers can help shape the law going forward. But we emphasize that even if fair use is met to the fullest, the impacts to some data creators will be large. We suggest that further work is needed to identify policies that can effectively manage and mitigate these impacts, where the technical mitigation strategies we propose here will fundamentally fall short. We hope that this guide will be useful to machine learning researchers and practitioners, as well as lawyers, judges, and policymakers thinking about these issues.”


About Martin White
Martin is an information scientist and the author of Making Search Work and Enterprise Search. He has been involved with optimising search applications since the mid-1970s and has worked on search projects in both Europe and North America. Since 2002 he has been a Visiting Professor at the Information School, University of Sheffield and is currently working on developing new approaches to search evaluation.
