Understanding the Journal Impact Factor

Blog Scientific American | May 7, 2012 | By Hadas Shema
Monday, July 2, 2012
by antonin


 Part One

The journals in which scientists publish can make or break their careers. A scientist must publish in “leading” journals with a high Journal Impact Factor (JIF), a number that high-impact journals proudly display on their websites. The JIF has become popular partly because it gives an “objective” measure of a journal’s quality and partly because it’s a neat little number that is relatively easy to understand. It’s widely used by academic librarians, authors, readers and promotion committees.

Raw citation counts emerged in the 1920s and were used mainly by science librarians who wanted to save money and shelf space by discovering which journals made the best investment in each field. The method had modest success, but it didn’t gain much momentum until the 1960s, perhaps because those librarians had to count citations by hand.

In 1955, Eugene Garfield published a paper in Science in which he discussed, for the first time, the idea of an Impact Factor based on citations. By 1964, he and his partners had published the Science Citation Index (SCI). (This is, of course, a very short, simplistic account of events; Paul Wouters’ PhD thesis, The Citation Culture, gives an excellent, detailed account of the creation of the SCI.) Around that time, Irving H. Sherman and Garfield created the JIF, intending to use it to select journals for the SCI. The SCI was eventually bought by the Thomson Reuters giant (TR).

To calculate a journal’s JIF for a given year, one takes the overall number of citations the journal received that year to articles published in the two previous years, and divides it by the number of items published in those two years that the Journal Citation Reports (JCR) considers “citable”. TR offers a 5-year JIF as well, but the 2-year JIF is the decisive one.

Example: JIF = (2011 citations to articles from 2009 and 2010) / (no. of “citable” articles published in 2009 and 2010)
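As a quick sketch (with made-up numbers, since the actual JCR counts are proprietary), the calculation above can be written in a few lines of Python:

```python
def journal_impact_factor(citations, citable_items):
    """Two-year JIF: citations received in a given year to articles from
    the two previous years, divided by the number of 'citable' items
    published in those two years."""
    return citations / citable_items

# Hypothetical journal: 650 citations in 2011 to its 2009-2010 articles,
# which comprised 200 "citable" items.
print(journal_impact_factor(650, 200))  # 3.25
```

The whole index really is just this one ratio, which is part of why it is so easy to understand, and, as discussed below, to manipulate.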

The JIF wasn’t meant for comparisons across disciplines, because every discipline has a different size and different citation behavior (mathematicians, for example, tend to cite less; biologists tend to cite more). The journal Cell has a 2010 JIF of 32.406, while Acta Mathematica, the journal with the highest 2010 JIF in the Mathematics category, has a JIF of 4.864.

Due to limited resources, the JCR covers about 8,000 science and technology journals and about 2,650 journals in the social sciences. It’s a large database, but it still covers only a fraction of the world’s research journals. If a journal is not in the JCR database, not only are all citations to it lost, but so are all the citations its articles give to journals that are in the database. Another coverage problem is that, having been created in the US, the JCR has an American and English-language bias.

Manipulating the impact factor

Given the importance of the JIF for prestige and subscriptions, it was to be expected that journals would try to influence it.

In 1997, the journal Leukemia was caught red-handed trying to boost its JIF by asking authors to cite more Leukemia articles. This is a very crude (though, had they not been caught, very effective) method of increasing the JIF. Journal self-citations can be completely legitimate: if one publishes in a certain journal, it makes sense that said journal has published other articles on the same subject. When done on purpose, however, it’s less than kosher, and it messes with the data (if you want to stay on an information scientist’s good side, do NOT mess with the data!). Part of the reason everyone has been trying to find alternatives to the JIF is that it’s so susceptible to manipulation (and finding alternatives has become our equivalent of sport).

A more effective method of improving the JIF is to eliminate sections of the journal that publish items the JCR counts as “citable” but that are rarely cited. This way the number of citations (the numerator) remains almost the same, but the number of citable items (the denominator) goes down considerably. In 2010, the journal manager and the chair of the journal’s steering committee of The Canadian Field-Naturalist sent a letter to Nature titled “Don’t dismiss journals with low impact factor”, in which they detailed how the journal’s refusal to eliminate a rarely cited ‘Notes’ section lowered its JIF. Editors can also publish more review articles, which are better cited, or longer articles, which are usually better cited as well; if the journal is online-only, they don’t even have to worry about the thickness of the issues. The JIF doesn’t count letters, editorials, etc. as citable items, but when such items are cited, the citations do count toward the journal’s overall citation count, while the number of citable items remains the same.
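A toy calculation (entirely made-up numbers) illustrates why cutting a rarely cited section pays off: the numerator barely moves, while the denominator shrinks.

```python
# Hypothetical journal before the cut: 500 citations, 250 citable items.
jif_before = 500 / 250                # 2.0

# Dropping a 'Notes' section removes 50 citable items that together
# attracted only 10 citations.
jif_after = (500 - 10) / (250 - 50)   # 490 / 200 = 2.45
print(jif_before, jif_after)
```

In this sketch the journal loses 2% of its citations but 20% of its citable items, and its JIF rises by more than 20%.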

The JIF doesn’t have to increase through deliberate manipulation. The journal Acta Crystallographica Section A had rather modest JIFs prior to 2009, when its JIF skyrocketed to 49.926, going even higher in 2010 (54.333). For comparison, Nature’s 2010 JIF is 36.104. The rise happened after a paper called “A short history of SHELX” was published in the journal in January 2008; it has been cited 26,281 times since (all data are from Web of Knowledge and were retrieved in May 2012). The article’s abstract says: “This paper could serve as a general literature citation when one or more of the open-source SHELX programs (and the Bruker AXS version SHELXTL) are employed in the course of a crystal-structure determination.”

Acta Crystallographica Section A Journal Impact Factor, Years 2006-2010


None of this means that the JIF isn’t a valid index, or that it should be discarded, but it does mean it has to be used with caution and in combination with other indices, as well as peer review.

Note: I assumed the writers of The Canadian Field-Naturalist letter were the journal’s editors, which turned out to be a wrong assumption (see the comment by Jay Fitzsimmons below). I fixed the post accordingly.

Note 2: My professor, Judit Bar-Ilan, read through the post and noted two mistakes. First, the JIF is of course calculated by dividing the citations received in a given year to articles from the two previous years by the number of citable items published in those two years, not the way I originally wrote it. Second, while the first volumes of the SCI contained citations to 1961 articles, they were published in 1964, not in 1961. I apologize for the mistakes.

References/further reading

Bar-Ilan, J. (2012). Journal report card. Scientometrics. DOI: 10.1007/s11192-012-0671-3

Fitzsimmons, J.M., & Skevington, J.H. (2010). Metrics: don’t dismiss journals with a low impact factor. Nature, 466, 179.

Garfield, E. (2006). The history and meaning of the journal impact factor. JAMA: Journal of the American Medical Association, 295(1), 90-93.

Seglen, P.O. (1997). Why the impact factor of journals should not be used for evaluating research. British Medical Journal, 314, 498–502.

Wouters, P. (1999). The Citation Culture. Unpublished PhD thesis, University of Amsterdam, Amsterdam.

http://thomsonreuters.com/products_...

Part Two

June 24, 2012

Despite its many faults (see Part One), the Journal Impact Factor (JIF) is considered an influential indicator of a journal’s quality, and publishing in high-impact journals is essential to a researcher’s academic career.

Reminder: to calculate, for example, the 2010 JIF for a journal:

JIF = (2010 citations to articles from 2008 and 2009) / (no. of “citable” articles published in 2008 and 2009)

The JIF did indeed start as a tool to help librarians with subscription decisions, but its influence among authors, readers and editors has increased with time, and so has the scientific community’s interest. More and more papers have been written about the JIF over the last thirty years (graph 1).

Graph 1: Number of papers on the JIF indexed in Web of Science, 1963–2006 (Archambault & Larivière, 2009).

Different field, different JIF

JIFs vary widely by discipline. Journals in specialized or applied areas will have, on average, lower JIFs than those in pure or fundamental areas (graph 2). The average number of references per article correlates with the citation impact of each field: biochemistry articles, for example, are cited twice as often as mathematics articles.

JIFs also correlate with the number of authors per article, because the more authors an article has, the better its chances of being self-cited. A study of Lancet articles found that even among articles published in the same journal, the most-cited articles had on average 3–5 times more authors than the least-cited ones. So the social sciences, with about two authors per article, have less citation impact than the fundamental life sciences, with more than four authors per article. The JIFs of arts and humanities journals are quite pitiful, because scholars in those fields rarely cite journal articles: the highest 2010 JIF in the JCR Cultural Studies category is 0.867.

Graph 2: Subject variation in impact factors (Amin & Mabe, 2007).

TR recently launched a new product, the Book Citation Index. It currently covers 30,000 books published from 2005 onwards, and 10,000 new books will be added every year. Of course, that means all book citations prior to 2005 will still go unnoticed, but it’s better than nothing, and we might finally see a bit of humanities coverage.

Drowning in a one-meter-deep (on average) pool

When researchers publish in high-impact journals, they enjoy the journals’ prestige even if their own articles are rarely cited, or not cited at all. It also works the other way around: a single well-cited article can raise a JIF considerably, especially for a small journal. A study of three biochemistry journals showed that 50% of each journal’s citations came from its 15% most-cited articles, and that the top half of articles were cited ten times as often as the bottom half. Articles published in the same journal can have completely different scientific impact (as measured by citations, of course).

The two-year citation window

The regular citation window of the JIF is two years. This favors fast-moving fields, where articles are cited quickly but also become obsolete fast. Journals in slower-moving fields, where citations don’t accumulate as quickly, will have higher JIFs over longer time frames. Looking at the average JIFs for 200 chemistry journals (graph 3), we see that the five-year JIF curve is smoother, while the two-year curve varies widely. This means that journals with different two-year JIFs might have more similar impact over time.

“Letters” journals, where articles are usually short, tend to receive more citations within the two-year window. The accumulation of citations for review journals, on the other hand, is slower. However, reviews tend to get so many citations that even the fraction received within the short citation window gives review journals relatively high JIFs. Campanario (2011) compared two-year and five-year JIFs and found that the longer citation window increased the JIFs of about 72% of journals but lowered them for about 27%.

Graph 3: JIF measurement-window fluctuations, 200+ chemistry journals (Amin & Mabe, 2007).

Self-citations

The debate over whether journal self-citations should be included in the JIF calculation is an old one. Currently, self-citations aren’t excluded from JIFs, but journals with an exceedingly high level of self-citations are sometimes “punished” and excluded from the index for a while. The rate of journal self-citation varies by discipline and journal, but in general it’s about 20%; for a specialized journal, the number might be higher. This is why editors sometimes write editorials with dozens of self-citations…
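A toy calculation (hypothetical numbers) shows how much a 20% self-citation rate can move the JIF of a small journal:

```python
# Hypothetical small journal: 300 citations in the JIF window,
# of which 60 (20%) are journal self-citations; 150 citable items.
total_citations = 300
self_citations = 60
citable_items = 150

jif_with_self = total_citations / citable_items                      # 2.0
jif_without_self = (total_citations - self_citations) / citable_items  # 1.6
print(jif_with_self, jif_without_self)
```

In this sketch, excluding self-citations would shave a fifth off the journal’s JIF, which is exactly why the inclusion debate persists.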

In conclusion

The JIF is a crude index of a journal’s impact (I won’t go so far as to say quality). It was devised at a certain time in history for certain uses and, well, might have been blown out of proportion. Corrections have been suggested over the years, but most of them have stayed in bibliometric journals rather than influencing the general scientific community. Many researchers see the JIF as a definitive measure, but it is only one tool in the box of science-measuring indices, and a journal’s JIF says very little about the quality of a single article or researcher. As Seglen (1997) said, “Evaluating scientific quality is a notoriously difficult problem which has no standard solution.”

References

Amin, M., & Mabe, M. (2007). Impact factors: use and abuse. Perspectives in Publishing.

Archambault, E., & Larivière, V. (2009). History of the journal impact factor: contingencies and consequences. Scientometrics, 79, 635-649. DOI: 10.1007/s11192-007-2036-x

Seglen, P.O. (1997). Why the impact factor of journals should not be used for evaluating research. BMJ, 314. DOI: 10.1136/bmj.314.7079.497

Kostoff, R.N. (2007). The difference between highly and poorly cited medical articles in the journal Lancet. Scientometrics, 72(3), 513-520. DOI: 10.1007/s11192-007-1573-7

Campanario, J.M. (2011). Empirical study of journal impact factors obtained using the classical two-year citation window versus a five-year citation window. Scientometrics. DOI: 10.1007/s11192-010-0334-1

Vanclay, J.K. (2012). Impact factor: outdated artefact or stepping-stone to journal certification? Scientometrics. DOI: 10.1007/s11192-011-0561-0


Read on Scientific American blog

See also:

“Les indicateurs de l’évaluation de la recherche : de l’impact factor à l’h-index”, by Laurence Bianchini on the MyScienceWork blog:

Evaluating the quality of scientific research is essential in a society where innovation and technical progress depend in part on academic research. Indicators such as the impact factor, the Eigenfactor or the h-index play a useful role in this process, but they cannot be the sole evaluation parameter, and they are very often used incorrectly. It is therefore important to understand how they are calculated and to know their limits and the alternatives. (Read more…)


