Personalized Task Recommendation in Crowdsourcing Systems by David Geiger
Personalized Task Recommendation in Crowdsourcing Systems is a tightly organized book, dense with the details of thorough research on crowdsourcing systems and on automated, personalized task recommendations targeted at users of such systems.
Crowdsourcing systems are defined here as open, web-scale systems built to support and facilitate self-selected individuals who seek and perform tasks proposed by requesters, delivering (mainly) informational products and services.
Within this broad area, the research presented in the book focuses on improving the utility of such systems to task seekers and performers by increasing the precision of their task search and recommendation facilities.
As usual, the first chapters define and scope the subject of the research.
The first broadly introduces the subject, while the second defines and classifies crowdsourcing systems from the socio-technical perspective on information systems. It provides a broad two-dimensional classification of such systems to inform the subsequent discussion.
The third chapter presents an in-depth analysis of the state of the art in task recommender features and techniques in crowdsourcing systems. A major part of the chapter is dedicated to a well-organized discussion evaluating this state of the art from multiple perspectives: the sourcing of information and knowledge for recommendations; the use of past tasks and contributions, and of self-declared or confirmed skills and capabilities, as recommendation drivers; the appropriateness of personalized recommendation techniques in various contexts; and, finally, their practical utility.
The discussion is grounded in a systematic review of published research on both crowdsourcing systems and recommender systems and algorithms. Its original contribution is a very clear systematization of the findings of existing research from the perspectives relevant to the "intersection" of crowdsourcing concerns and recommender concerns, which is the specific subject of the research presented in the book.
From a practitioner's perspective, the next chapter is perhaps the most interesting.
It describes the requirements, architecture, design and implementation of a prototype third-party task recommendation service implemented for the Amazon Mechanical Turk crowdsourcing system. The design presented consists of a domain model for crowdsourcing tasks, requesters, contributors and interactions, following the approach in Domain-Driven Design: Tackling Complexity in the Heart of Software (Evans, 2003) and Implementing Domain-Driven Design (Vernon, 2013). The recommender algorithm implemented in the service is then described. Finally, extensive implementation details are provided regarding the interaction of the prototype recommendation service with the 'host' crowdsourcing platform.
For designers and developers of crowdsourcing systems and associated services this chapter alone easily justifies reading the entire book.
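To make the idea of such a service concrete, here is a minimal sketch of a content-based task recommender that matches contributor preference profiles against task keywords. All names, types and the scoring scheme are illustrative assumptions of this review, not the actual design or algorithm described in the book.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a preference-based task recommender.
# Names and scoring are illustrative, not the book's actual design.

@dataclass
class Task:
    task_id: str
    keywords: set      # descriptive tags attached to the task
    reward: float      # payment offered by the requester

@dataclass
class Contributor:
    contributor_id: str
    # keyword -> affinity weight, e.g. learned from past contributions
    preferences: dict = field(default_factory=dict)

def score(task: Task, contributor: Contributor) -> float:
    """Sum the contributor's affinity for each keyword of the task."""
    return sum(contributor.preferences.get(k, 0.0) for k in task.keywords)

def recommend(tasks, contributor, top_n=3):
    """Rank open tasks by preference score, highest first."""
    ranked = sorted(tasks, key=lambda t: score(t, contributor), reverse=True)
    return [t.task_id for t in ranked[:top_n]]

# usage
tasks = [
    Task("t1", {"image", "label"}, 0.05),
    Task("t2", {"audio", "transcribe"}, 0.10),
    Task("t3", {"image", "classify"}, 0.07),
]
alice = Contributor("alice", {"image": 0.9, "transcribe": 0.2})
print(recommend(tasks, alice, top_n=2))  # -> ['t1', 't3']
```

A real service of this kind would additionally have to fetch open tasks from the host platform's API and keep contributor profiles up to date from their contribution history, which is where most of the implementation effort described in the chapter lies.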
The next chapter describes the field research undertaken using the recommendation service described in the previous chapter.
It is divided between an initial pilot study and survey and a large-scale evaluation of the recommendation service in actual use by a self-selected population of contributors to the crowdsourcing system. The methodologies used for each are presented clearly and concisely, with the emphasis on statistical data analysis that is customary in such research.
The key conclusion of the chapter is that the personalized task recommendations produced by the service under test were shown to match the task preferences of the contributors using it at a statistically significant level above chance, confirming the basic hypothesis of the research presented in the book.
The last chapter is a brief summary of the entire book.
From the perspective of researchers and professionals whose focus is information retrieval, the book may be judged interesting for a variety of reasons.
First, it describes a quite specific information retrieval environment and scenario: searching for tasks to perform in a crowdsourcing system according to one's preferences. However, certain features of this scenario could be extrapolated to other goal-directed and context-sensitive IR processes, e.g. searching for items to research and buy as gifts; searching for best-match knowledge items or experts relevant to specific tasks; or even searching for research papers within a set of inter-related domains.
Secondly, the approach taken to modelling and designing the prototype task recommendation service will be a valuable source of inspiration for designers and developers of any recommender system, including prototypes intended for field research on actual online systems. Furthermore, the focus of the recommendation technique and algorithm described in this book on performance and practical results could be extrapolated to novel IR tasks that are time-sensitive and performed by machines for the benefit of machines rather than humans. One example is a cloud workload monitoring and control system, where the preferences and SLAs of a large number of workloads must be matched in real time to the availability and current performance of executing nodes in order to support dynamic workload allocation and throttling.
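The workload-matching extrapolation can be sketched as a simple greedy assignment; this is purely an illustration of the reviewer's example, with all field names, data shapes and the matching policy being assumptions rather than anything from the book.

```python
# Hypothetical sketch of real-time workload-to-node matching by SLA.
# Field names and the greedy policy are illustrative assumptions.

def match_workloads(workloads, nodes):
    """Assign each workload to the lowest-latency node that still
    satisfies its SLA latency bound and has enough free CPU.
    Workloads with no qualifying node are left unassigned (throttled)."""
    assignment = {}
    # Handle the tightest SLAs first so they get the fastest nodes.
    for w in sorted(workloads, key=lambda w: w["max_latency_ms"]):
        candidates = [n for n in nodes
                      if n["current_latency_ms"] <= w["max_latency_ms"]
                      and n["free_cpu"] >= w["cpu"]]
        if not candidates:
            continue  # throttle: no node currently meets the SLA
        node = min(candidates, key=lambda n: n["current_latency_ms"])
        node["free_cpu"] -= w["cpu"]       # reserve capacity
        assignment[w["id"]] = node["id"]
    return assignment

# usage
workloads = [
    {"id": "w1", "max_latency_ms": 50, "cpu": 2},
    {"id": "w2", "max_latency_ms": 200, "cpu": 1},
]
nodes = [
    {"id": "n1", "current_latency_ms": 30, "free_cpu": 2},
    {"id": "n2", "current_latency_ms": 120, "free_cpu": 4},
]
print(match_workloads(workloads, nodes))  # -> {'w1': 'n1', 'w2': 'n2'}
```

The parallel with task recommendation is that both problems rank a pool of "tasks" against the preferences and capabilities of a "performer" under time pressure; only here both sides of the match are machines.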
Overall, I found Personalized Task Recommendation in Crowdsourcing Systems a systematic, well-written and also thought-provoking book.