KITP Ranks First In Research Impact

How can the performance of the Kavli Institute for Theoretical Physics be measured and assessed? Anne Kinney, in an article published in the Nov. 13, 2007, issue of the Proceedings of the National Academy of Sciences, presents a new way of assessing “National Scientific Facilities and Their Science Impact on Nonbiomedical Research.” Her approach enables comparison of the impact of research fostered in conjunction with a given facility. The KITP, with a score of 6.56, registers the highest impact-index reported.

The KITP is not only the highest-ranked National Science Foundation (NSF) facility but the highest ranked of all research facilities, outscoring (in order from second to fourth) the highest-ranked department, Astronomy at Berkeley, with an impact-index of 6.30; the highest-ranked university, Harvard, with an impact-index of 6.15; and the highest-ranked national laboratory, the Stanford Linear Accelerator Center (SLAC), with an impact-index of 5.65.

Kinney, who serves as director of the astrophysics division in NASA’s Science Mission Directorate, describes Berkeley’s Astronomy Department (with an impact-index second only to KITP in her assessment) as “traditionally…one of the highest ranked scientific groups and so serves as a gold standard.” Presumably that standard also applies to the facility with the highest impact-index.

Assessing the quality of the KITP’s programs has proved challenging for the institute’s directors because much of their measurable effect, in terms of quality research, appears in publications by visiting scientists whose primary institutional affiliation is elsewhere.

This respect in which the KITP differs from the other institutions against which it is being evaluated is underscored in an explanatory footnote to the PNAS article: “KITP is a theoretical physics institute that regularly organizes conferences and long-term workshops in physical sciences. Many of the authors of papers with KITP affiliation are visitors, with other home institutions.”

The key question addressed in the PNAS article is how to compare research facilities that differ appreciably from one another in size and nature. The answer is a “normalized h index.”

The number of citations to a published paper is usually a good measure of the quality of the research reported in that paper, because being cited in papers by other scientists demonstrates the impact of the cited work on the research of others. The higher the impact, the more significant the cited research.

The h index, pertaining to the body of a researcher’s work, is a more definitive indicator of overall quality. The h index of a scientist is the largest number h such that h of his or her publications have each been cited at least h times. The bigger an individual’s h index, the more highly cited papers that individual has published. The h index is now widely used to measure the citation impact of an individual.
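
The definition is easy to make concrete. The short Python sketch below is purely illustrative (the PNAS article contains no code, and the function name and sample numbers are invented for this example): it sorts a researcher’s per-paper citation counts and finds the largest rank h at which the h-th paper still has at least h citations. The second example anticipates the point made in the next paragraph, that one enormously cited paper barely moves h.

    def h_index(citations):
        # Largest h such that h papers have at least h citations each.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, count in enumerate(ranked, start=1):
            if count >= rank:
                h = rank
            else:
                break  # counts are sorted, so no later paper can qualify
        return h

    # A researcher with a steady pattern of influential papers:
    print(h_index([20, 18, 15, 12, 10, 9, 8, 7, 6, 5]))  # 7

    # One anomalously cited paper (say, a review) adds little:
    print(h_index([500, 4, 3, 2, 1]))  # 3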

By ignoring the long tail of the citation distribution, the h index stresses not the single most influential papers but a sustained pattern of influential papers, and therefore career productivity. For instance, the h index does not unduly weight a highly cited review article that is the exception rather than the rule in a given scientist’s citation record.

The PNAS article applies this individual h index cumulatively to measure the research impact of the science research facilities (including mathematics and the engineering fields, but excluding biomedical science) in which these individuals work. Plotting h against the size of an institution turned up a simple scaling law: for a large group, h scales as the number of papers to the 2/5 power. The h indices of institutions can thereby be normalized to factor out differences such as size. In other words, a physics department with 50 faculty members can be compared equitably with a physics department with 100 faculty, since the larger department would be expected to accumulate more citations simply because twice as many scientists are publishing papers that can be cited.
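
That normalization can be sketched in the same illustrative Python, assuming, as the scaling law above suggests, that the size dependence is removed by dividing h by the number of papers raised to the 2/5 power. The numbers here are hypothetical, not drawn from the article:

    def normalized_h(h, n_papers):
        # Factor out the expected size effect: h ~ (number of papers)**(2/5).
        return h / n_papers ** 0.4

    # A group half the size is not penalized once size is factored out:
    print(round(normalized_h(100, 4000), 2))  # 3.62
    print(round(normalized_h(75, 2000), 2))   # 3.59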

The normalized h index enables further comparisons of different types of research environments such as universities as a whole, individual departments within universities, national laboratories, and institutes. It provides, in effect, a way of comparing apples and oranges.

Visitors to the KITP are asked to acknowledge, in their papers, the work done at the KITP. Such acknowledgement is standard practice for working visits in the academic community. In the assessment that placed the KITP first, the PNAS article used “data from even years beginning with 1980 and ending with 1998.” The reason given for the 1998 cutoff is the need to allow time for citations to accrue.

Just as the h index discounts the anomalously highly cited paper, whether by a Nobel Prize winner or a widely cited review article, so the normalized h index discounts the presence of one stellar, oft-cited scientist in the midst of an otherwise pedestrian group. What is being assessed, then, is the overall efficacy of an environment as a producer of significant scientific research, and in the physical sciences, including mathematics, the KITP achieves the highest impact.

How well does KITP achieve its mission of creating an environment to enhance significant scientific research? The answer, according to the PNAS article, is better than any other nonbiomedical research facility in the United States and, therefore, probably the world.

KITP Newsletter, Spring 2008