***** To join INSNA, visit http://www.insna.org *****
Measuring Research Output with Science & Technology Indicators
The measurement of research output and the ranking of universities have
become an industry in their own right. Ranking, however, is based on reducing
this complexity to a single number, and the weighting of the different
dimensions remains
a problem. Research output measurements are based on indicators such as
impact factors. These indicators have been clearly defined, but for other
purposes (Garfield, 1979). For example, impact factors can vary by an order
of magnitude between mathematics and the life sciences. Would a university
be well advised to close its mathematics department in order to improve its
ranking?
Because publication and citation rates differ significantly among fields of
science, universities (or, analogously, nations) are too heterogeneous for
accurate comparison (Collins, 1985). Fields of science, however, cannot
clearly be decomposed because the journal sets overlap. Subject
categorization works well in the core sets, but not at interfaces
(Leydesdorff, 2006a). Thus, field-normalization is necessarily burdened with
technical decisions which may heavily influence the resulting rankings.
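The field-normalization problem can be illustrated with a minimal sketch. The field means below are hypothetical numbers, chosen only to show how the choice of baseline (here, dividing by a field's average citation count) drives the resulting scores:

```python
# Sketch of field-normalized citation scores (illustrative values only).
# Each paper's citation count is divided by the mean citation count of
# its field, so papers are judged against their own field's baseline.

field_means = {"mathematics": 2.0, "life_sciences": 20.0}  # hypothetical means

def normalized_score(citations, field):
    """Citations relative to the field average (1.0 = at the field average)."""
    return citations / field_means[field]

# The same raw count of 4 citations looks very different per field:
math_score = normalized_score(4, "mathematics")    # well above the field mean
bio_score = normalized_score(4, "life_sciences")   # well below the field mean
print(math_score, bio_score)
```

The sketch also shows where the technical decisions enter: everything hinges on how papers are assigned to fields and how the field means are computed, which is exactly where overlapping journal sets cause trouble.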
Independently of the differences among fields of science, publications come
in different types. When a relatively short time window is used, as in the
case of the impact factor, journals which publish letters and fields with
fast-moving research fronts are favoured. Review journals, for example,
have "cited half-life times" significantly longer than letters (Leydesdorff).
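Garfield's two-year impact factor can be written out explicitly; the journal figures below are hypothetical, chosen only to show how the short citation window rewards fast-citing publication types:

```python
# Two-year journal impact factor (Garfield's definition):
# citations received in year Y to items published in years Y-1 and Y-2,
# divided by the number of citable items published in Y-1 and Y-2.

def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical numbers: a fast-citing letters journal vs. a review journal
# whose citations mostly arrive outside the two-year window.
letters_if = impact_factor(600, 300)
reviews_if = impact_factor(3000, 100)
print(letters_if, reviews_if)
```

Because only citations falling inside the two-year window count, a journal with long cited half-lives accrues most of its impact after the window has closed.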
Should one then give up? Can information science or sociology help us to
improve the measurement? The systems of scientific communication and
technological innovation provide us with rich domains for studying the
dynamics of science, technology, and innovation (Moed et al., 2004). The
communication of knowledge can be measured, modeled, and simulated. However,
knowledge transfer is not linear; hence, one needs models from non-linear
dynamics. Knowledge can be considered as a meaning which makes a difference
and potentially reduces uncertainty (Leydesdorff, 2006b). The study of
science, technology, and innovation provides us with measurement tools for
variables which are then used outside their analytical context (Figure 1),
for example when legitimating allocation decisions.
Figure 1: Different perspectives in the study of science and technology.
In a recent research project, we were granted access to the funding
decisions of one of the research councils in the Netherlands (Van den
Besselaar & Leydesdorff, 2007). We found that in the case of matched pairs
of positive and negative funding decisions, the rejected authors had
significantly higher publication and citation rates than the funded ones.
Funding decisions are riddled with institutional bias, programmatic
preferences, etc. (Wenneras & Wold, 1997; Bornmann & Daniel, 2006).
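The logic of the matched-pairs design can be sketched as follows; the citation figures are invented for illustration and do not reproduce the study's data:

```python
# Illustrative matched-pairs comparison (hypothetical data): for each pair
# of one funded and one rejected applicant, count in how many pairs the
# rejected applicant has the higher citation rate.

pairs = [  # (funded_citations, rejected_citations), hypothetical values
    (10, 14), (8, 12), (15, 13), (5, 9), (7, 11),
]

rejected_higher = sum(1 for funded, rejected in pairs if rejected > funded)
print(f"{rejected_higher} of {len(pairs)} pairs favour the rejected applicant")
```

In the actual study a finding of this shape, tested for statistical significance, is what supports the claim that rejected authors outperformed funded ones on publication and citation rates.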
Thus, one is caught between the Scylla of peer-review and the Charybdis of
statistical analysis. Intellectual caution is advised! When Harvard
University appears at the top of the rankings, one could raise a question
about publications and citations per dollar. In terms of productivity,
European universities may be more efficient than American ones because of
the huge differences in their budgets (Dosi et al., 2006).
Amsterdam School of Communications Research (ASCoR),
Kloveniersburgwal 48, 1012 CX Amsterdam;
[log in to unmask] <http://www.leydesdorff.net/>