About Metrics

Evaluating scientific work properly requires in-depth study of research publications. But today’s academic reality often demands quick, quantitative comparisons. These comparisons rely on metrics for scientific impact, which are meant to quantify the relevance of research at both the institutional and the individual level. Many scientists are rightfully skeptical of attempts to measure their success, but quantitative evaluation is, and will remain, necessary.

However, metrics for scientific impact, while necessary, can negatively influence the behavior of researchers. If a high score on a certain metric counts as success, this creates an incentive for researchers to work towards increasing their score, rather than following their own judgment of what constitutes good research. This problem of “perverse incentives” and “gaming measures” is widely known, yet little has been done about it. With this web interface we want to take a step towards improving the situation.

A major reason that metrics for scientific success can redirect research interests is that few metrics are readily available. While the literature on bibliometrics and scientometrics contains hundreds of proposals, most researchers presently draw on only a handful of indicators that are easy to obtain: notably the number of citations, the number of publications, the Hirsch index, and the number of papers in high-impact-factor journals. Such a narrow definition of success can streamline research strategies and agendas, with the risk that scientific exploration becomes inefficient and stalls.
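
Of these indicators, the Hirsch index is the only one that is not a simple count: it is the largest number h such that an author has at least h papers with at least h citations each. A minimal sketch of the computation in Python (illustrative only, not SciMeter’s implementation):

    def h_index(citations):
        """Largest h such that at least h papers have >= h citations."""
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, count in enumerate(counts, start=1):
            if count >= rank:
                h = rank
            else:
                break
        return h

    # Example: five papers with these citation counts give an h-index of 3.
    print(h_index([10, 8, 5, 2, 1]))  # -> 3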

By offering SciMeter, we want to work against this streamlining. SciMeter allows everyone to create their own metric to capture what they personally consider the best way of quantifying scientific impact. These metrics can be used to evaluate individuals and to quickly sort lists of applicants. Since personal metrics can be kept private and can be adapted at any time, they counteract streamlining and make gaming impossible: a researcher cannot optimize for a score whose definition is unknown and may change.
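
To give a concrete, if simplistic, picture of what a personal metric might look like: one could combine standard indicators with privately chosen weights. The function and field names below are hypothetical and only illustrate the idea of a composite metric; they do not describe how SciMeter works internally.

    def composite_score(indicators, weights):
        """Weighted sum of standard indicators; the weights encode
        what the evaluator personally considers important."""
        return sum(weights[key] * indicators.get(key, 0.0) for key in weights)

    # Hypothetical weighting that emphasizes citations over paper count.
    weights = {"citations": 0.5, "papers": 0.2, "h_index": 0.3}
    indicators = {"citations": 120, "papers": 15, "h_index": 7}
    print(composite_score(indicators, weights))  # 0.5*120 + 0.2*15 + 0.3*7 = 65.1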

Databases

We currently offer bibliographic analysis for two different databases: the arXiv and the Microsoft Academic Graph (MAG). You can switch between these two databases in the top right corner.

The differences between the two databases are as follows.

ArXiv data is limited to the years from 1991 onward and to physics and related disciplines (such as mathematics and computer science). The arXiv does not itself provide citation data, and the citation data we obtain from a third party (Paperscape) is incomplete and has not been updated since 2018. If you are interested in a general bibliometric analysis, we therefore recommend that you use the MAG database.

However, several of our apps currently work only for arXiv data because the arXiv, among other things, allows full-text download (so we can evaluate the length of papers and extract keywords from the abstracts), has category classifiers, and offers author identifiers (which can be linked to ORCID). For this reason, the keyword clouds and the searches for topics and similar authors are presently available only for arXiv data.
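
For readers who want to experiment with the underlying metadata themselves: the arXiv exposes titles, abstracts, and category classifications through its public API. The snippet below is a minimal, self-contained sketch of one such query; it is independent of SciMeter, and the search term is an arbitrary example.

    import urllib.request
    import xml.etree.ElementTree as ET

    # Query the public arXiv API for a single record (returned as an Atom feed).
    url = ("http://export.arxiv.org/api/query"
           "?search_query=all:bibliometrics&max_results=1")
    with urllib.request.urlopen(url) as response:
        feed = ET.fromstring(response.read())

    ns = {"atom": "http://www.w3.org/2005/Atom",
          "arxiv": "http://arxiv.org/schemas/atom"}
    for entry in feed.findall("atom:entry", ns):
        title = entry.find("atom:title", ns).text.strip()
        category = entry.find("arxiv:primary_category", ns).get("term")
        print(title, "-", category)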

Support us

This web interface has been built with support from the Foundational Questions Institute. If you would like to support further development of this service, please contact us at .