Friday, 4 January 2008

Are Journal Impact Factors Reliable?

The answer given by Peter Klein at Organizations and Markets is "not really". Klein writes:

Thomson Scientific (formerly ISI) uses an imprecise and inconsistent method to compute journal impact factors and, even worse, refuses to release the raw data so that scores can be independently verified. Journals typically require authors to make data public as a condition of publication; why rely on rankings based on hidden data? Writes RePEc: "[A]ll of us should treat impact factors and citation data with considerable caution. Basing journal rankings, tenure, promotion, and raises on uncritical acceptance of [these] data is a poor idea."

It would be helpful to know more about the magnitude and direction of the potential bias. Do these problems affect the rank ordering of journals, or merely the precision of the point estimates? Is there any research on this question?
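For readers unfamiliar with the metric in question: the published definition of the two-year impact factor is a simple ratio, sketched below. The sketch follows the publicly stated formula only; Thomson's actual counting rules (which citations and which "citable items" are included) are exactly what critics say cannot be verified, so treat this as illustrative, not as their procedure.

```python
def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Two-year impact factor for year Y: citations received in Y
    to items published in Y-1 and Y-2, divided by the number of
    citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Illustrative numbers (hypothetical): a journal whose 2005-2006
# articles drew 600 citations in 2007, from 200 citable items.
print(impact_factor(600, 200))  # 3.0
```

Note that the ambiguity lies entirely in the inputs, not the arithmetic: small shifts in what counts as a citable item move the denominator and can reorder journals.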