The journals in which scientists publish can make or break their careers. A scientist must publish in “leading” journals with a high Journal Impact Factor (JIF) — a number you can see displayed proudly on high-impact journals’ websites. The JIF became popular partly because it offers an “objective” measure of a journal’s quality and partly because it’s a neat little number that is relatively easy to understand. It’s widely used by academic librarians, authors, readers and promotion committees.
Raw citation counts emerged in the 1920s and were used mainly by science librarians who wanted to save money and shelf space by identifying which journals made the best investment in each field. The method had modest success, but it didn’t gain much momentum until the 1960s, perhaps because those librarians had to count citations by hand.
In 1955, Eugene Garfield published a paper in Science in which he discussed, for the first time, the idea of an impact factor based on citations. In 1964, he and his partners published the Science Citation Index (SCI). (This is, of course, a very short and simplistic account of events; Paul Wouters’ PhD thesis, The Citation Culture, gives an excellent, detailed account of the SCI’s creation.) Around that time, Irving H. Sherman and Garfield created the JIF, intending to use it to select journals for the SCI. The SCI was eventually bought by the Thomson Reuters giant (TR).
To calculate the JIF for a given year, one takes the total number of citations the journal received that year to articles published in the two previous years, and divides it by the number of items published in those two years that the Journal Citation Reports (JCR) considers “citable”. TR offers 5-year JIFs as well, but the 2-year JIF is the decisive one.
Example: 2011 JIF = (2011 citations to 2009+2010 articles) / (no. of “citable” articles published in 2009+2010)
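The arithmetic above is simple enough to express directly. A minimal sketch in Python — the journal and all the counts are invented for illustration:

```python
def journal_impact_factor(citations_in_year, citable_items):
    """Two-year JIF: citations received in year Y to articles published
    in years Y-1 and Y-2, divided by the citable items of those two years."""
    return citations_in_year / citable_items

# Hypothetical journal: in 2011 it received 300 citations to its
# 2009-2010 articles, and published 120 citable items in those two years.
jif_2011 = journal_impact_factor(citations_in_year=300, citable_items=120)
print(round(jif_2011, 3))  # 2.5
```

Note that both the numerator (which citations count) and the denominator (which items are “citable”) are editorial decisions made by the JCR, which is exactly what makes the number manipulable, as described below.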
The JIF wasn’t meant for comparisons across disciplines, because disciplines differ in size and citation behavior (mathematicians, for example, tend to cite less; biologists, more). The journal Cell has a 2010 JIF of 32.406, while Acta Mathematica, the journal with the highest 2010 JIF in the Mathematics category, has a JIF of 4.864.
Due to limited resources, the JCR covers about 8,000 science and technology journals and about 2,650 social science journals. That is a large database, but it still covers only a fraction of the world’s research journals. If a journal is not in the JCR database, not only are all citations to it lost, but so are all the citations its articles give to journals that are in the database. Another coverage problem is that, having been created in the US, the JCR has an American, English-language bias.
Manipulating the impact factor
Given the importance of the JIF for prestige and subscriptions, it was only to be expected that journals would try to influence it.
In 1997, the journal Leukemia was caught red-handed trying to boost its JIF by asking authors to cite more Leukemia articles. This is a very crude (though, had they not been caught, very effective) method of increasing the JIF. Journal self-citations can be completely legitimate: if one publishes in a certain journal, it stands to reason that the journal has published other articles on the same subject. When solicited on purpose, however, they are less than kosher, and they mess with the data (if you want to stay on an information scientist’s good side, do NOT mess with the data!). Part of the reason everyone has been trying to find alternatives to the JIF is that it’s so susceptible to manipulation (and that finding alternatives has become our equivalent of sport).
A subtler method of improving the JIF is to eliminate journal sections that publish items the JCR counts as “citable” but which are rarely cited. This way the number of citations (the numerator) remains almost the same, but the number of citable items (the denominator) goes down considerably. In 2010, the journal manager and the chair of the journal’s steering committee of The Canadian Field-Naturalist sent a letter to Nature, titled “Don’t dismiss journals with low impact factor,” in which they detailed how the journal’s refusal to eliminate a rarely cited ‘Notes’ section lowered its JIF. Editors can also publish more review articles, which are cited more often, or longer articles, which are usually better cited as well; if the journal is cyberspace-only, they don’t even have to worry about the thickness of the issues. The JCR doesn’t count letters, editorials and the like as citable items, yet citations to them still count toward the journal’s overall citation total, while the number of citable items remains the same.
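The denominator trick is easiest to see with numbers. A hedged illustration, with entirely invented figures for a journal whose ‘Notes’ section contributes many “citable” items but very few citations:

```python
# Invented figures: regular articles vs a rarely cited 'Notes' section.
article_citations, article_items = 400, 100  # well-cited full articles
notes_citations, notes_items = 10, 60        # many items, few citations

# JIF with the 'Notes' section counted among citable items:
with_notes = (article_citations + notes_citations) / (article_items + notes_items)
# JIF after the section is eliminated: numerator barely moves,
# denominator shrinks, JIF jumps.
without_notes = article_citations / article_items

print(round(with_notes, 2))     # 2.56
print(round(without_notes, 2))  # 4.0
```

The same asymmetry explains why uncited letters and editorials are a free lunch: their citations enter the numerator while they themselves never enter the denominator.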
The JIF doesn’t have to increase through deliberate manipulation. The journal Acta Crystallographica Section A had rather modest IFs prior to 2009, when its IF skyrocketed to 49.926, and even higher in 2010 (54.333). For comparison, Nature’s 2010 IF is 36.104. The rise came after a paper called “A short history of SHELX” was published in the journal in January 2008; it has been cited 26,281 times since (all data from Web of Knowledge, retrieved in May 2012). The article’s abstract says: “This paper could serve as a general literature citation when one or more of the open-source SHELX programs (and the Bruker AXS version SHELXTL) are employed in the course of a crystal-structure determination.”
None of this means the JIF isn’t a valid index, or that it should be discarded, but it does mean it has to be used with caution, and in combination with other indices as well as peer review.
Note: I assumed the writers of the The Canadian Field-Naturalist letter were the journal’s editors, which turned out to be a wrong assumption (see below comment by Jay Fitzsimmons). I fixed the post accordingly.
Note 2: My professor, Judit Bar-Ilan, read through the post and noted two mistakes. First, the JIF is, of course, calculated by dividing a given year’s citations to the two previous years’ articles by the number of citable items published in those two years, and not the way I originally wrote it. Second, while the first volumes of the SCI contained citations to 1961 articles, they were published in 1964, not 1961. I apologize for the mistakes.
Bar-Ilan, J. (2012). Journal report card. Scientometrics. DOI: 10.1007/s11192-012-0671-3
Fitzsimmons, J.M., & Skevington, J.H. (2010). Metrics: don’t dismiss journals with a low impact factor. Nature, 466, 179.
Garfield, E. (2006). The history and meaning of the journal impact factor. JAMA, 295(1), 90–93.
Seglen, P.O. (1997). Why the impact factor of journals should not be used for evaluating research. British Medical Journal, 314, 498–502.
Wouters, P. (1999). The citation culture. Unpublished Ph.D. thesis, University of Amsterdam, Amsterdam.
Despite its many faults (see part I), the Journal Impact Factor (JIF) is considered an influential indicator of a journal’s quality, and publishing in high-impact journals is essential to a researcher’s academic career.
Reminder: to calculate, for example, a journal’s 2010 JIF:
JIF = (2010 citations to 2008+2009 articles) / (no. of “citable” articles published in 2008+2009)
The JIF did start out as a tool to help librarians with subscription decisions, but its influence among authors, readers and editors has grown over time, and so has the scientific community’s interest in it. More and more papers have been written about the JIF over the last thirty years (graph 1).
- Graph 1: Number of papers on the JIF indexed in Web of Science, 1963–2006 (Archambault & Lariviere, 2009).
Different field, different JIF
JIFs vary widely by discipline. Journals dealing with specialized or applied areas have, on average, lower JIFs than those in pure or fundamental areas (graph 2). The average number of references per article correlates with the citation impact of each field: biochemistry articles, for example, receive about twice as many citations as mathematics articles.
JIFs also correlate with the number of authors per article, because the more authors an article has, the better its chances of being self-cited. A study of Lancet articles found that even among articles published in the same journal, the most-cited articles had, on average, 3–5 times more authors than the least-cited ones. So the social sciences, with about two authors per article, have less citation impact than the fundamental life sciences, with more than four authors per article. The JIFs of arts and humanities journals are quite pitiful, because scholars in those fields rarely cite journal articles. The highest 2010 JIF in the JCR Cultural Studies category is 0.867.
- Graph 2: Subject variation in impact factors (Amin & Mabe, 2007).
TR recently launched a new product, the Book Citation Index. It currently covers 30,000 books published from 2005 onwards, and 10,000 new books will be added every year. That means, of course, that all book citations prior to 2005 will still go unnoticed, but it’s better than nothing, and we might finally see a bit of humanities coverage.
Drowning in a one-meter-deep (on average) pool.
When researchers publish in high-impact journals, they enjoy the journals’ prestige even if their own articles are rarely cited, or not cited at all. It also works the other way around: a single well-cited article can raise a JIF considerably, especially in a small journal. A study of three biochemistry journals showed that 50% of a journal’s citations came from its 15% most-cited articles, and that the top half of articles were cited ten times as often as the bottom half. Articles published in the same journal can have completely different scientific impact (as measured by citations, of course).
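This skew is the reason a JIF-style average describes almost no individual article. A quick sketch with an invented citation distribution for a small journal of 20 articles, one of them a “hit” paper:

```python
# Invented citation counts for 20 articles: one hit paper and a long
# tail of rarely cited ones, the typical skewed shape.
citations = [120] + [6, 5, 4, 4, 3, 3, 2, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]

mean = sum(citations) / len(citations)        # what a JIF-style average sees
median = sorted(citations)[len(citations) // 2]  # what a typical article gets

print(mean)    # 7.75
print(median)  # 2
```

The mean (the JIF’s logic) is pulled far above what a typical article in the journal actually receives, exactly the Acta Crystallographica Section A story from part I in miniature.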
The two-year citation window
The standard citation window of the JIF is two years. This favors fast-moving fields, where articles are cited quickly but also become obsolete quickly. Journals in slower-moving fields, where citations accumulate more gradually, do better over longer time frames. Looking at the average JIFs of 200 chemistry journals (graph 3), the five-year JIF curve is smoother, while the two-year curve fluctuates widely. Journals with quite different two-year JIFs may therefore have similar impact over time.
“Letters” journals, where articles are usually short, tend to receive more of their citations within the two-year window. For review journals, on the other hand, citations accumulate more slowly; reviews, however, attract so many citations that even the fraction arriving within the short window gives review journals relatively high JIFs. Campanario (2011) compared two-year and five-year JIFs and found that the longer citation window increased the JIFs of about 72% of journals but lowered them for about 27%.
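The window effect can be sketched with two invented citation trajectories (citations an average article receives in each year after publication), one for a fast-moving field and one for a slow one:

```python
# Invented per-year citation trajectories for two hypothetical journals.
fast_field = [10, 8, 2, 1, 1]   # cited quickly, obsolesces fast
slow_field = [2, 4, 6, 6, 5]    # citations accumulate slowly

# Citations falling inside a two-year vs a five-year window:
two_year = (sum(fast_field[:2]), sum(slow_field[:2]))
five_year = (sum(fast_field), sum(slow_field))

print(two_year)   # (18, 6)  -> fast field looks three times "better"
print(five_year)  # (22, 23) -> over five years they are nearly equal
```

Under these (made-up) numbers the two-year window triples the apparent gap between the journals, while the five-year window nearly erases it, which is the pattern graph 3 shows for real chemistry journals.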
- Graph 3: JIF measurement window fluctuations, 200+ chemistry journals (source: Amin & Mabe, 2007).
The debate over whether journal self-citations should be included in the JIF calculation is an old one. Currently, self-citations are not excluded, but journals with an exceedingly high level of self-citation are sometimes “punished” by being excluded from the index for a while. The journal self-citation rate varies by discipline and journal, but in general it’s about 20%; for a specialized journal, it may be higher. This is why editors sometimes write editorials with dozens of self-citations…
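To see how much that roughly 20% share matters, here is a sketch with invented counts comparing a JIF computed with and without self-citations:

```python
# Invented counts: a journal where ~20% of citations are self-citations,
# in line with the typical share quoted above.
total_citations, self_citations = 500, 100
citable_items = 150

jif_including_self = total_citations / citable_items
jif_excluding_self = (total_citations - self_citations) / citable_items

print(round(jif_including_self, 2))  # 3.33
print(round(jif_excluding_self, 2))  # 2.67
```

Even at the typical rate the difference is substantial, which is why a journal that pushes its self-citation share well past 20% can move its JIF noticeably, and why TR occasionally suspends such journals from the JCR.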
The JIF is a crude index of a journal’s impact (I won’t go as far as to say quality). It was devised at a certain time in history for certain uses and, well, may have been blown out of proportion. Corrections have been suggested over the years, but most stayed within bibliometric journals rather than influencing the general scientific community. Many researchers see the JIF as a definitive measure, but it is only one tool in the box of science-measuring indices, and a journal’s JIF says very little about the quality of a single article or researcher. As Seglen (1997) put it: “Evaluating scientific quality is a notoriously difficult problem which has no standard solution.”
Amin, M., & Mabe, M. (2007). Impact factors: use and abuse. Perspectives in Publishing.
Archambault, E., & Lariviere, V. (2009). History of the journal impact factor: contingencies and consequences. Scientometrics, 79, 635–649. DOI: 10.1007/s11192-007-2036-x
Seglen, P.O. (1997). Why the impact factor of journals should not be used for evaluating research. BMJ, 314. DOI: 10.1136/bmj.314.7079.497
Kostoff, R.N. (2007). The difference between highly and poorly cited medical articles in the journal Lancet. Scientometrics, 72(3), 513–520. DOI: 10.1007/s11192-007-1573-7
Campanario, J.M. (2011). Empirical study of journal impact factors obtained using the classical two-year citation window versus a five-year citation window. Scientometrics. DOI: 10.1007/s11192-010-0334-1
Vanclay, J.K. (2012). Impact factor: outdated artefact or stepping-stone to journal certification? Scientometrics. DOI: 10.1007/s11192-011-0561-0