Research Evaluation Metrics


Book Description

Partial translation of the Introduction: "Research evaluation is currently being rethought all over the world. In some cases research work is producing very good results; in most cases the results are mediocre, and in some cases they are negative. For all these reasons, evaluating research output becomes a sine qua non. When researchers were fewer, it was their own professional colleagues who evaluated the research. Over time the number of researchers increased, research areas proliferated, and research output multiplied. The trend continued, and after the Second World War research began to grow exponentially. Today, even by a conservative estimate, there are more than a million researchers, producing over two million research papers and other documents per year. In this context, research evaluation is a matter of prime importance. For any promotion, accreditation, award or grant there may be dozens or hundreds of nominees, and selecting the best candidate from among them is a difficult matter. Peer evaluation is in many cases proving to be subjective. In 1963 the Science Citation Index (SCI) was created, covering the scientific literature from 1961 onwards. A few years later, Eugene Garfield, founder of the SCI, prepared a list of the 50 most cited scientific authors, based on the citations an author's work received from the work of fellow researchers. The paper, entitled 'Can Nobel Prize winners be predicted?', was published in 1968 (Garfield and Malin, 1968). The following year, 1969, two scientists on the list, Derek H. R. Barton and Murray Gell-Mann, received the coveted prize. This vindicated the usefulness of citation analysis. Every year several scientists in the fields of Physics, Chemistry, and Physiology or Medicine receive the Nobel Prize. In this way citation analysis became a useful tool. Citation analysis has nevertheless always attracted criticism and has multiple flaws. Even Garfield commented: 'Using citation analysis for evaluation purposes is a difficult task. There are many possibilities for error' (Garfield, 1983). For research evaluation, other indicators were needed. Citation analysis, together with peer review, guarantees the best judgement in countless cases, but something more exact is needed. The arrival of the World Wide Web (WWW) provided the opportunity, as a good number of indicators are now being generated from data available on the WWW." (Spanish translation: Julio Alonso Arévalo, Univ. of Salamanca).




Evaluative Informetrics: The Art of Metrics-Based Research Assessment


Book Description

This Festschrift for Henk Moed combines a “best of” collection of his papers with new contributions (original research papers) by authors who have worked and collaborated with him. The outcome of this original combination aims to provide an overview of the advancement of the field at the intersection of bibliometrics, informetrics, science studies and research assessment.




Citation Analysis in Research Evaluation


Book Description

This book is written for members of the scholarly research community, and for persons involved in research evaluation and research policy. More specifically, it is directed towards the following four main groups of readers:
– All scientists and scholars who have been or will be subjected to a quantitative assessment of research performance using citation analysis.
– Research policy makers and managers who wish to become conversant with the basic features of citation analysis, and with its potentialities and limitations.
– Members of peer review committees and other evaluators who consider using citation analysis as a tool in their assessments.
– Practitioners and students in the field of quantitative science and technology studies, informetrics, and library and information science.
Citation analysis involves the construction and application of a series of indicators of the ‘impact’, ‘influence’ or ‘quality’ of scholarly work, derived from citation data, i.e. data on references cited in footnotes or bibliographies of scholarly research publications. Such indicators are applied both in the study of scholarly communication and in the assessment of research performance. The term ‘scholarly’ comprises all domains of science and scholarship, including not only the fields normally denoted as science – the natural and life sciences, mathematical and technical sciences – but also the social sciences and humanities.
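As a purely illustrative aside, not taken from the book: one widely used indicator of this citation-based kind is the h-index, which can be computed directly from a list of per-paper citation counts. The sketch below uses hypothetical numbers.

```python
# Minimal sketch of one citation-based indicator, the h-index: the largest h such
# that at least h of an author's papers have h or more citations each.
def h_index(citations: list[int]) -> int:
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has at least `rank` citations
        else:
            break
    return h

# Hypothetical per-paper citation counts for one researcher.
print(h_index([42, 17, 9, 6, 6, 3, 1, 0]))  # -> 5
```

In evaluative practice such raw counts are usually field-normalised and interpreted alongside peer review rather than used on their own.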




The Metric Tide


Book Description

‘Represents the culmination of an 18-month-long project that aims to be the definitive review of this important topic. Accompanied by a scholarly literature review, some new analysis, and a wealth of evidence and insight... the report is a tour de force; a once-in-a-generation opportunity to take stock.’ – Dr Steven Hill, Head of Policy, HEFCE, LSE Impact of Social Sciences Blog

‘A must-read if you are interested in having a deeper understanding of research culture, management issues and the range of information we have on this field. It should be disseminated and discussed within institutions, disciplines and other sites of research collaboration.’ – Dr Meera Sabaratnam, Lecturer in International Relations at the School of Oriental and African Studies, University of London, LSE Impact of Social Sciences Blog

Metrics evoke a mixed reaction from the research community. A commitment to using data and evidence to inform decisions makes many of us sympathetic, even enthusiastic, about the prospect of granular, real-time analysis of our own activities. Yet we only have to look around us at the blunt use of metrics to be reminded of the pitfalls. Metrics hold real power: they are constitutive of values, identities and livelihoods. How to exercise that power to positive ends is the focus of this book. Using extensive evidence-gathering, analysis and consultation, the authors take a thorough look at potential uses and limitations of research metrics and indicators. They explore the use of metrics across different disciplines, assess their potential contribution to the development of research excellence and impact, and consider the changing ways in which universities are using quantitative indicators in their management systems. Finally, they consider the negative or unintended effects of metrics on various aspects of research culture. Including an updated introduction from James Wilsdon, the book proposes a framework for responsible metrics and makes a series of targeted recommendations to show how responsible metrics can be applied in research management, by funders, and in the next cycle of the Research Excellence Framework. The metric tide is certainly rising. Unlike King Canute, we have the agency and opportunity – and in this book, a serious body of evidence – to influence how it washes through higher education and research.




Gaming the Metrics


Book Description

How the increasing reliance on metrics to evaluate scholarly publications has produced new forms of academic fraud and misconduct. The traditional academic imperative to “publish or perish” is increasingly coupled with the newer necessity of “impact or perish”—the requirement that a publication have “impact,” as measured by a variety of metrics, including citations, views, and downloads. Gaming the Metrics examines how the increasing reliance on metrics to evaluate scholarly publications has produced radically new forms of academic fraud and misconduct. The contributors show that the metrics-based “audit culture” has changed the ecology of research, fostering the gaming and manipulation of quantitative indicators, which lead to the invention of such novel forms of misconduct as citation rings and variously rigged peer reviews. The chapters, written by both scholars and those in the trenches of academic publication, provide a map of academic fraud and misconduct today. They consider such topics as the shortcomings of metrics, the gaming of impact factors, the emergence of so-called predatory journals, the “salami slicing” of scientific findings, the rigging of global university rankings, and the creation of new watchdogs and forensic practices.




Public Relations Metrics


Book Description

Responding to the increasing need in academia and the public relations profession, this volume presents the current state of knowledge in public relations measurement and evaluation. The book brings together ideas and methods that can be used throughout the world, and scholars and practitioners from the United States, Europe, Asia, and Africa are represented.




Meaningful Metrics


Book Description

Research libraries have engaged in publishing activities in the past, but recently there has been intense growth in the number of library publishing services supporting faculty and students. Unified by a commitment to both access and service, library publishing programs have grown from an early focus on backlist digitization to publication of student works, textbooks, and research data. This growing engagement with publishing is a natural extension of the academic library's commitment to support the creation of and access to scholarship. Getting the Word Out examines the growing trend in library publishing with 11 chapters by some of the most talented thinkers in the field. Edited by library publishing experts Maria Bonn, of the University of Illinois Urbana-Champaign Graduate School of Library and Information Science, and Mike Furlough, HathiTrust Digital Library, this book deepens current discussions in the field, and provides decision makers and practitioners with an introduction to the state of the field with an eye towards future prospects. -- from back cover.




Measuring Research


Book Description

Policy makers, academic administrators, scholars, and members of the public are clamoring for indicators of the value and reach of research. The question of how to quantify the impact and importance of research and scholarly output, from the publication of books and journal articles to the indexing of citations and tweets, is a critical one in predicting innovation, and in deciding what sorts of research are supported and who is hired to carry it out. A wide set of data and tools is available for measuring research, but they are often used in crude ways, and each has its own limitations and internal logic. Measuring Research: What Everyone Needs to Know® provides, for the first time, an accessible account of the methods used to gather and analyze data on research output and impact. Following a brief history of scholarly communication and its measurement -- from traditional peer review to crowdsourced review on the social web -- the book looks at the classification of knowledge and academic disciplines, the differences between citations and references, the role of peer review, national research evaluation exercises, the tools used to measure research, the many different types of measurement indicators, and how to measure interdisciplinarity. The book also addresses emerging issues within scholarly communication, including whether measurement promotes a "publish or perish" culture, fraud in research, or "citation cartels." It also looks at the stakeholders behind these analytical tools, the adverse effects of these quantifications, and the future of research measurement.
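As an illustrative sketch only, not an excerpt from the book: one common way to quantify interdisciplinarity is to measure how widely a publication's cited references are spread across disciplines, for example with Shannon entropy. The discipline labels below are hypothetical.

```python
import math
from collections import Counter

def reference_diversity(reference_disciplines: list[str]) -> float:
    """Shannon entropy (in bits) of the discipline distribution of a paper's cited references."""
    counts = Counter(reference_disciplines)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical disciplines assigned to the references cited by one paper.
refs = ["physics", "physics", "computer science", "biology", "biology", "biology"]
print(round(reference_diversity(refs), 3))  # ~1.459; higher = references more evenly spread
```

Richer indicators used in the literature, such as Rao-Stirling diversity, additionally weight how distant the cited disciplines are from one another.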




Clinical Text Mining


Book Description

This open access book describes the results of natural language processing and machine learning methods applied to clinical text from electronic patient records. It is divided into twelve chapters. Chapters 1-4 discuss the history and background of the original paper-based patient records, their purpose, and how they are written and structured. These initial chapters do not require any technical or medical background knowledge. The remaining eight chapters are more technical in nature and describe various medical classifications and terminologies such as ICD diagnosis codes, SNOMED CT, MeSH, UMLS, and ATC. Chapters 5-10 cover basic tools for natural language processing and information retrieval, and how to apply them to clinical text. The differences between rule-based and machine learning-based methods, and between supervised and unsupervised machine learning methods, are also explained. Next, ethical concerns regarding the use of sensitive patient records for research purposes are discussed, including methods for de-identifying electronic patient records and safely storing patient records. The book’s closing chapters present a number of applications in clinical text mining and summarise the lessons learned from the previous chapters. The book provides a comprehensive overview of technical issues arising in clinical text mining, and offers a valuable guide for advanced students in health informatics, computational linguistics, and information retrieval, and for researchers entering these fields.
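To make the rule-based versus machine-learning distinction concrete, here is a minimal, hypothetical sketch of a rule-based de-identification pass over clinical free text. It is not the book's method; production systems combine far more rules with statistical models and terminology lookups.

```python
import re

# Hypothetical regular-expression rules: each pattern maps to a placeholder tag.
PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}[- ]\d{3}[- ]\d{4}\b"),
    "ID": re.compile(r"\bMRN[: ]?\d+\b"),
}

def deidentify(text: str) -> str:
    """Replace matched identifiers with bracketed category tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Patient seen on 12/03/2021, MRN 483920, call 555-123-4567 to follow up."
print(deidentify(note))  # Patient seen on [DATE], [ID], call [PHONE] to follow up.
```

A purely statistical alternative would instead train a sequence-labelling model on annotated notes; the trade-off is transparency and control versus coverage of identifier patterns the rules miss.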




Beyond Bibliometrics


Book Description

A comprehensive, state-of-the-art examination of the changing ways we measure scholarly performance and research impact.