Experiment and Evaluation in Information Retrieval Models


Book Description

Experiment and Evaluation in Information Retrieval Models explores different algorithms for applying evolutionary computation to the field of information retrieval (IR). As well as examining existing approaches to resolving some of the problems in this field, it critically evaluates results obtained by researchers in order to give readers a clear view of the topic. In addition, the book covers algorithmic solutions to problems in advanced IR concepts, including feature selection for document ranking, web page classification and recommendation, facet generation for document retrieval, duplicate detection, and seeker satisfaction in community question answering portals. Written with students and researchers in the field of information retrieval in mind, the book is also a useful tool for researchers in the natural and social sciences interested in the latest developments in this fast-moving subject area. (A small illustrative sketch of the evolutionary-computation idea appears after the feature list below.)

Key features: Focusing on recent topics in information retrieval research, Experiment and Evaluation in Information Retrieval Models explores the following topics in detail:
- Searching in social media
- Using semantic annotations
- Ranking documents based on facets
- Evaluating IR systems offline and online
- The role of evolutionary computation in IR
- Document and term clustering
- Image retrieval
- Design of user profiles for IR
- Web page classification and recommendation
- Relevance feedback approaches for document and image retrieval
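
For readers who want the core idea in miniature, the following is a toy sketch, not code from the book, of evolutionary computation applied to ranking: a small genetic algorithm evolves per-feature weights for a linear document scorer. The feature vectors, relevance labels, fitness function (precision at 2), and GA parameters are all illustrative assumptions.

    import random

    # Toy data: (feature vector, relevance label) pairs for one query.
    # Features and labels are made up purely for illustration.
    DOCS = [([0.9, 0.1, 0.4], 1), ([0.2, 0.8, 0.5], 0),
            ([0.7, 0.3, 0.9], 1), ([0.1, 0.6, 0.2], 0)]

    def fitness(weights):
        """Precision at 2 of the ranking induced by a linear scorer."""
        ranked = sorted(DOCS, reverse=True,
                        key=lambda d: sum(w * f for w, f in zip(weights, d[0])))
        return sum(label for _, label in ranked[:2]) / 2

    def evolve(pop_size=20, generations=50, mutation_rate=0.2):
        """Evolve feature weights by selection, crossover, and mutation."""
        pop = [[random.random() for _ in range(3)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]              # truncation selection
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, 3)           # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < mutation_rate:    # point mutation
                    child[random.randrange(3)] = random.random()
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    best = evolve()
    print("best weights:", best, "precision@2:", fitness(best))

In a realistic setting the fitness would be an effectiveness measure computed over many training queries rather than one toy query, but the select-crossover-mutate loop is the same.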




Introduction to Information Retrieval


Book Description

Class-tested and coherent, this textbook teaches classical and web information retrieval, including web search and the related areas of text classification and text clustering, from basic concepts. It gives an up-to-date treatment of all aspects of the design and implementation of systems for gathering, indexing, and searching documents; methods for evaluating systems; and an introduction to the use of machine learning methods on text collections. All the important ideas are explained using examples and figures, making it perfect for introductory courses in information retrieval for advanced undergraduates and graduate students in computer science. Based on feedback from extensive classroom experience, the book has been carefully structured in order to make teaching more natural and effective. Slides and additional exercises (with solutions for lecturers) are also available through the book's supporting website to help course instructors prepare their lectures.
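
As a taste of the gathering-indexing-searching pipeline the book covers, here is a minimal sketch, not taken from the book, of an inverted index with Boolean AND retrieval; the tokenizer and the toy corpus are illustrative assumptions.

    from collections import defaultdict

    def tokenize(text):
        """Naive tokenizer: lowercase and split on whitespace (illustrative)."""
        return text.lower().split()

    def build_index(docs):
        """Map each term to the set of IDs of documents containing it."""
        index = defaultdict(set)
        for doc_id, text in enumerate(docs):
            for term in tokenize(text):
                index[term].add(doc_id)
        return index

    def boolean_and(index, query):
        """Intersect the postings of all query terms (Boolean AND)."""
        postings = [index.get(term, set()) for term in tokenize(query)]
        return sorted(set.intersection(*postings)) if postings else []

    docs = ["information retrieval systems",
            "machine learning on text collections",
            "evaluating retrieval systems"]
    index = build_index(docs)
    print(boolean_and(index, "retrieval systems"))  # -> [0, 2]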




Test Collection Based Evaluation of Information Retrieval Systems


Book Description

The use of test collections and evaluation measures to assess the effectiveness of information retrieval systems has its origins in work dating back to the early 1950s. In the nearly 60 years since that work started, test collections have become the de facto standard of evaluation. This monograph surveys the research conducted and explains the methods and measures devised for the evaluation of retrieval systems, including a detailed look at the use of statistical significance testing in retrieval experimentation. It also reviews more recent examinations of the validity of the test collection approach and of evaluation measures, and outlines trends in current research exploiting query logs and live labs. At its core, the modern-day test collection is little different from the structures that the pioneering researchers conceived of in the 1950s and 1960s. This tutorial and review shows that, despite its age, this long-standing evaluation method is still a highly valued tool for retrieval research.
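
For a flavor of the statistical significance testing the monograph examines, the following is a minimal sketch, not drawn from the monograph itself, of a paired t-test over per-topic average precision scores for two systems; the score arrays are made-up illustrations.

    import math
    import statistics

    # Hypothetical per-topic average precision scores for two systems
    # measured on the same 10 topics (made-up numbers for illustration).
    system_a = [0.42, 0.31, 0.57, 0.22, 0.48, 0.35, 0.61, 0.29, 0.44, 0.38]
    system_b = [0.39, 0.28, 0.60, 0.19, 0.41, 0.33, 0.55, 0.27, 0.40, 0.36]

    def paired_t_statistic(xs, ys):
        """t statistic of the paired per-topic differences (df = n - 1)."""
        diffs = [x - y for x, y in zip(xs, ys)]
        n = len(diffs)
        return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

    t = paired_t_statistic(system_a, system_b)
    print(f"paired t = {t:.3f} on {len(system_a) - 1} degrees of freedom")
    # Compare against the t distribution (e.g. scipy.stats.t.sf) for a p-value.

Pairing by topic matters: it removes the large per-topic variance that would otherwise swamp the typically small between-system differences.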




Online Evaluation for Information Retrieval


Book Description

This book provides a comprehensive overview of online evaluation for information retrieval. It shows how online evaluation is used for controlled experiments, segmenting them into experiment designs that allow either absolute or relative quality assessments. It also includes an extensive discussion of recent work on data re-use and on experiment estimation based on historical data.
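
Relative quality assessments of the kind discussed here are commonly obtained by interleaving the result lists of two rankers and crediting user clicks to the ranker that contributed each result. Below is a minimal sketch of team-draft interleaving, a standard method in this literature, though not necessarily the book's exact formulation; the ranked lists are illustrative.

    import random

    def team_draft_interleave(ranking_a, ranking_b, rng=random):
        """Merge two rankings; each 'team' drafts its best unseen doc in turn."""
        total = set(ranking_a) | set(ranking_b)
        merged, teams, seen = [], {}, set()
        while len(seen) < len(total):
            order = [("A", ranking_a), ("B", ranking_b)]
            if rng.random() < 0.5:          # coin flip decides who drafts first
                order.reverse()
            for team, ranking in order:
                for doc in ranking:
                    if doc not in seen:     # draft the team's best remaining doc
                        seen.add(doc)
                        merged.append(doc)
                        teams[doc] = team
                        break
        return merged, teams

    merged, teams = team_draft_interleave(["d1", "d2", "d3"], ["d2", "d4", "d1"])
    print(merged)  # e.g. ['d1', 'd2', 'd3', 'd4'] depending on coin flips
    print(teams)   # clicks on each doc would be credited to its team

The ranker whose team accumulates more credited clicks across many users is judged better, giving a relative (paired) assessment rather than an absolute one.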




Information Retrieval


Book Description

An introduction to information retrieval, the foundation of modern search engines, that emphasizes implementation and experimentation. This textbook covers the core topics underlying modern search technologies, including algorithms, data structures, indexing, retrieval, and evaluation; each chapter includes exercises and suggestions for student projects. Wumpus, a multiuser open-source information retrieval system developed by one of the authors and available online, provides model implementations and a basis for student work. The modular structure of the book allows instructors to use it in a variety of graduate-level courses, including courses taught from a database systems perspective, traditional information retrieval courses with a focus on IR theory, and courses covering the basics of Web retrieval. In addition to its classroom use, Information Retrieval will be a valuable reference for professionals in computer science, computer engineering, and software engineering.
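
To suggest the kind of retrieval algorithm a course built on this book might implement, here is a minimal sketch of BM25 term weighting, a standard ranking function rather than code from the book or from Wumpus; the toy corpus and the parameter values k1 and b are illustrative.

    import math
    from collections import Counter

    def bm25_score(query_terms, doc_terms, corpus, k1=1.2, b=0.75):
        """BM25 score of one document for a query (standard IDF variant)."""
        N = len(corpus)
        avgdl = sum(len(d) for d in corpus) / N       # average document length
        tf = Counter(doc_terms)
        score = 0.0
        for term in query_terms:
            df = sum(1 for d in corpus if term in d)  # document frequency
            if df == 0:
                continue
            idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
            f = tf[term]
            norm = k1 * (1 - b + b * len(doc_terms) / avgdl)
            score += idf * f * (k1 + 1) / (f + norm)
        return score

    corpus = [["information", "retrieval", "systems"],
              ["retrieval", "evaluation", "measures"],
              ["machine", "learning", "for", "text"]]
    for doc in corpus:
        print(doc, round(bm25_score(["retrieval", "evaluation"], doc, corpus), 3))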




Current Challenges in Patent Information Retrieval


Book Description

Patents form an important knowledge resource – much technical information represented in patents is not represented in scientific literature – and at the same time they are important, and economically highly relevant, legal documents. Between 1998 and 2008, the number of patent applications filed yearly worldwide grew by more than 50 percent. Yet we still see a huge gap between, on the one hand, the technologies that emerged from research labs and are in use in major Internet search engines or in enterprise search systems, and, on the other hand, the systems used daily by the patent search communities.

In the past few years, the editors have organized a series of events at the Information Retrieval Facility in Vienna, Austria, bringing together leading researchers in information retrieval (IR) and those who practice and use patent search, thus establishing an interdisciplinary dialogue between the IR and intellectual property (IP) communities and creating a discursive as well as empirical space for sustainable discussion and innovation. This book is among the results of that joint effort. Many of the chapters were written jointly by IP and IR experts, while all chapters were reviewed by representatives of both communities, resulting in contributions that foster the proliferation and exchange of knowledge across fields and disciplinary mindsets.

Reflecting the efforts and views of both sides of the emerging patent search research and innovation community, this is a carefully selected, organized introduction to what has been achieved, and, perhaps even more significantly, to what remains to be achieved. The book is a valuable resource for IR researchers and IP professionals who are looking for a comprehensive overview of the state of the art in this domain.




Information Retrieval Evaluation


Book Description

Evaluation has always played a major role in information retrieval, with early pioneers such as Cyril Cleverdon and Gerard Salton laying the foundations for most of the evaluation methodologies in use today. The retrieval community has been extremely fortunate to have such a well-grounded evaluation paradigm during a period when most of the human language technologies were just developing. This lecture has the goal of explaining where these evaluation methodologies came from and how they have continued to adapt to the vastly changed environment in the search engine world today.

The lecture starts with a discussion of the early evaluation of information retrieval systems, beginning with the Cranfield testing in the early 1960s, continuing with the Lancaster "user" study for MEDLARS, and presenting the various test collection investigations by the SMART project and by groups in Britain. The emphasis in this first chapter is on the how and the why of the various methodologies developed.

The second chapter covers the more recent "batch" evaluations, examining the methodologies used in the various open evaluation campaigns such as TREC, NTCIR (emphasis on Asian languages), CLEF (emphasis on European languages), and INEX (emphasis on semi-structured data). Here again the focus is on the how and why, and in particular on the evolution of the older evaluation methodologies to handle new information access techniques, including how the test collection techniques were modified and how the metrics were changed to better reflect operational environments.

The final chapters look at evaluation issues in user studies, the interactive part of information retrieval, including a look at the search log studies mainly done by the commercial search engines. Here the goal is to show, via case studies, how high-level issues of experimental design affect the final evaluations.

Table of Contents: Introduction and Early History / "Batch" Evaluation Since 1992 / Interactive Evaluation / Conclusion




Simulating Information Retrieval Test Collections


Book Description

Simulated test collections may find application in situations where real datasets cannot easily be accessed due to confidentiality concerns or practical inconvenience. They can potentially support Information Retrieval (IR) experimentation, tuning, validation, performance prediction, and hardware sizing. Naturally, the accuracy and usefulness of results obtained from a simulation depend upon the fidelity and generality of the models which underpin it. The fidelity of emulation of a real corpus is likely to be limited by the requirement that confidential information in the real corpus cannot be extracted from the emulated version. We present a range of methods exploring trade-offs between emulation fidelity and the degree of privacy preserved.

We present three simple types of text generator which work at a micro level: Markov models, neural net models, and substitution ciphers. We also describe macro-level methods where we can engineer macro properties of a corpus, giving a range of models for each of the salient properties: document length distribution, word frequency distribution (for independent and non-independent cases), word length and textual representation, and corpus growth. We present results of emulating existing corpora and of scaling corpora up by two orders of magnitude. We show that simulated collections generated with relatively simple methods are suitable for some purposes and can be generated very quickly. Indeed, it may sometimes be feasible to embed a simple lightweight corpus generator into an indexer for the purpose of efficiency studies.

Naturally, a corpus of artificial text cannot support IR experimentation in the absence of a set of compatible queries. We discuss and experiment with published methods for query generation and query log emulation.

We present a proof-of-the-pudding study in which we observe the predictive accuracy of efficiency and effectiveness results obtained on emulated versions of TREC corpora. The study includes three open-source retrieval systems and several TREC datasets. There is a trade-off between confidentiality and prediction accuracy, and there are interesting interactions between retrieval systems and datasets. Our tentative conclusion is that there are emulation methods which achieve useful prediction accuracy while providing a level of confidentiality adequate for many applications.
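
As a concrete illustration of the micro-level generators mentioned above, here is a toy word-level Markov-chain text generator; it is an illustrative sketch, not the authors' implementation, and the seed text is made up.

    import random
    from collections import defaultdict

    def build_markov_model(text, order=1):
        """Map each word-tuple state to the list of words observed after it."""
        words = text.split()
        model = defaultdict(list)
        for i in range(len(words) - order):
            state = tuple(words[i:i + order])
            model[state].append(words[i + order])
        return model

    def generate(model, length=20, rng=random):
        """Random-walk the chain to emit synthetic text."""
        state = rng.choice(list(model))
        out = list(state)
        for _ in range(length - len(state)):
            choices = model.get(state)
            if not choices:                 # dead end: restart from a random state
                state = rng.choice(list(model))
                choices = model[state]
            word = rng.choice(choices)
            out.append(word)
            state = tuple(out[-len(state):])
        return " ".join(out)

    seed = ("the retrieval system ranks documents and the system returns "
            "documents that match the query and the query terms")
    model = build_markov_model(seed, order=1)
    print(generate(model, length=15))

Higher orders reproduce more of the source's local structure at the cost of leaking more of the original text, which is exactly the fidelity-versus-confidentiality trade-off the book explores.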