How Junk Citations Have Discredited the Academy: Part 4

Editor’s Note: This piece is part of an ongoing series of articles by Professor Bruce Gilley. To read the other articles in the series, click here.


Just when serious scholars started to worry about the pandemic of junk citations, others were positively promoting them. The “citation justice” movement I discussed in the last installment is one example. Another, which I will discuss here, is less visible: “citation-doping,” which some scholars and academic journals use to boost their citation counts.

Not all citation-doping is seen by its practitioners as unethical. Often, it is unintentional; other times, it is thought to be fair play given the competition for citations. At my host university in Thailand, large banners are hoisted across campus every year to congratulate faculty and graduate students whose work was highly cited in the previous year. With so much at stake, it’s little wonder that inordinate effort goes into boosting, or “doping,” citation counts by means fair or foul.

Author-level doping begins with the simple act of citing one’s own truly remarkable research with alarming frequency. This leads to some comical outcomes, such as the discovery that the hitherto unknown giant of computer science research, Sundarapandian Vaidyanathan of the privately run Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology in Chennai, India, was as highly cited in his field as Nobel laureates. A study of super self-citers found, on closer inspection, that 94% of Vaidyanathan’s citations came from himself or his co-authors, compared to a median self-citation rate across all disciplines of about 13%. The study’s database of some 100,000 highly cited researchers included hundreds more who, like Vaidyanathan, had gained more than half of their citations from their biggest fan.

Many claim that men self-cite more than women. Thus, female academics raise their fists with the battle cry to “cite only women.” This claim is untrue, though, according to a study of 1.6 million articles published between 2002 and 2005 in the field of biomedicine. The real source of the disparity is that more productive scholars have more opportunities and greater visibility, both of which generate relevant self-citations. Women tend to have shorter, more frequently interrupted careers, which diminishes both factors. Otherwise, women self-cite just as often as men do.

[Related: “How Junk Citations Have Discredited the Academy: Part 3”]

The more nefarious form of author-level doping involves “citation rings,” or “citation cartels.” Here, a group of scholars colludes to cite one another’s work. In 2014, the publisher Sage retracted 60 articles in the Journal of Vibration and Control, alleging that Peter Chen of National Pingtung University of Education in Taiwan had forged a network of fraudsters (including the Taiwanese Minister of Education, who resigned over the scandal) that was used to peer-review and later cite the works of cartel members.

More vigilant editors could catch this, so a newer practice is to sneak cartel citations in during the production phase, when the editors are asleep. In one paper from China, the authors added no fewer than 10 citations to a concluding sentence that did not appear in the preprint. This, according to the gadfly site PubPeer, is known as a “citation delivery vehicle.”

Sometimes, the journal editors themselves are the dopers. So-called “coercive citations” are the academic equivalent of the casting couch: prospective authors are forced to cite the work of the journal’s editors in order to be considered for publication. Biophysicist Kuo-Chen Chou of Boston was removed from the editorial board of the Journal of Theoretical Biology in 2020 for his quite untheoretical habit of demanding that authors cite his papers (an astounding average of 31 per aspiring author). Sometimes, he even forced them to add him as a co-author. Not surprisingly, academics now make a point of preemptively citing the editors of the journals they submit to. Nothing like a little flattery to grease the wheels of the scientific endeavor.

It’s a small step from author-level doping to journal-level doping. Although citation indexes are supposed to penalize journals that cite themselves too heavily, many editors still ask authors to lard their manuscripts with references to the journal’s own past articles. The index Journal Citation Reports suspended 329 journals for excessive self-citation between 2007 and 2017.

[Related: “How Junk Citations Have Discredited the Academy: Part 2”]

To skirt the self-citation penalties, journals collude with sister publications, forcing authors to cite articles in each other’s pages. Thomson Reuters suspended the journal Applied Clinical Informatics (ACI) for boosting the citation performance of its sister journal Methods of Information in Medicine (MIM); an inquiry discovered that 39% of the 2015 citations to MIM came from ACI. Similarly, three Romanian physics journals doped each other’s citation counts, according to a 2016 article by Petr Heneberg of Charles University in Prague.

Often, the journal-level dope is injected during the window over which a journal’s impact factor is calculated, usually the most recent two to five years. As a result, articles older than this are essentially ignored, worsening the bias toward “the latest” thinking that is already supercharged by the Internet and social media.
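To see why the timing of the dope matters, it helps to sketch the arithmetic. The standard two-year impact factor for a year Y is, roughly, the following ratio (the five-year version works the same way; the symbols C and N below are my own shorthand, not the index’s):

\[
\mathrm{IF}_{Y} \;=\; \frac{C_{Y-1} + C_{Y-2}}{N_{Y-1} + N_{Y-2}}
\]

Here, C_{Y-1} and C_{Y-2} are the citations received in year Y by articles the journal published in each of the two preceding years, and N_{Y-1} and N_{Y-2} are the counts of citable items it published in those years. On purely hypothetical numbers: a journal with 200 citable items in the window and 500 citations to them scores 2.5; a mere 100 doped citations aimed inside the window lift it to 3.0, while a thousand citations to anything older move the needle not at all.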

Of course, as with author-level doping, some journal-level doping is unintentional. U.S. law school students who edit law reviews, bless their hearts, seem to think that all important research on American law is published in student-edited law reviews. They thus encourage authors to reference little else. It’s endearing, but wrong, according to an analysis by four Israeli scholars.

Citation justice warriors point to such sociological causes as reasons why they need to form their own rival citation cartels for, say, blacks or homosexuals. Citation justice is, in this respect, simply a citation cartel operating in the open. Academic citations thus devolve into a sort of eighteenth-century naval battle in which ships under various flags maneuver to gain the weather gauge while firing off cannonades of citations to scuttle yonder scoundrels.

In the concluding installment of this series, I will consider some solutions for salvaging academic credibility from the flotsam and jetsam of junk citations.


Image: Adobe Stock

Author

  • Bruce Gilley

    Bruce Gilley is a professor of political science at Portland State University and a member of the board of the National Association of Scholars. In addition to his work on academic freedom and the revival of intellectual pluralism on campus, Dr. Gilley’s research centers on comparative development and politics as well as contemporary public policy issues.


8 thoughts on “How Junk Citations Have Discredited the Academy: Part 4”

  1. Some level of self-citation is essential, if only to give the reader background on the intellectual foundation of work that builds on the author’s prior research. At a time when academia increasingly pushes researchers into ever more granular specialization, one can often find researchers doing such siloed work that their own IS the only, or the primary, relevant background for the paper.

  2. I’d love to see an analysis of how much scholars’ citation rates increase while they hold journal editorships. This has happened twice to me — the first time, one anonymous reviewer said my paper would be “much improved” by citing the work of scholar x. I responded to the review saying I did not think such a citation was relevant, though I took on the criticisms offered by the other anonymous reviewers in my R & R. My paper was then rejected, and a few months later it turned out… the guest editor of the issue, and that anonymous reviewer, was none other than scholar x.

    In a later experience, a journal editor mentioned “as an aside” that he thought a work of his own was relevant to mine, though it had gone unmentioned by the anonymous reviewers. Older and wiser, I cited his work in my R & R. It was accepted for publication, imagine that.

    1. Kathleen, wow, sorry for your lived experience! I would have to go back through all the correspondence to see if this has happened to me. I have definitely had journal-level doping “suggested,” but never by an editor. Thanks for sharing that.

  3. I don’t quite see what the issue is here — the purpose of a citation is to direct the reader to the source of the information you are presenting, and if the source is your prior paper, then you not only should cite yourself but (as I understand it) are required to.

    In theory, one could read your earlier paper and follow *its* sources back to the original information, and some argue that if you merely present those citations again without also citing yourself, you are plagiarizing yourself.

    What is the careful scholar to do?

    1. A citation is required only when you are quoting someone or copying material verbatim from another source. Failure to cite in such cases is plagiarism. Otherwise, citations are optional.

      1. It may vary between fields and the style guides they use, but Education uses the APA (American Psychological Association) style guide, and I have to cite both the ideas of others and facts that are not common knowledge. The latter calls for a more relaxed citation, but you still gotta do it — at least, that’s what I’ve always been told and what the style manual appears to indicate.

        STEM may be different

      2. It is also required when you are specifically referencing relevant prior work. Copying verbatim or quoting are not the ONLY cases where citation is required. Any reference to a factual assertion not based on the data and results of the study requires direction to the source of the assertion, as does reference to a specific theoretical model developed outside of the paper.

        For example, in a paper I published in the American Journal of Medical Quality regarding a production function for quality, I did not need to reference the theory as the model was mathematically derived by me in the paper. I illustrated its implications with a discussion of the variation of results for Medicare pay-for-performance demonstration projects, and it was necessary to cite the specific papers evaluating the projects in order to refer the reader to the primary source for the data.

        Your statement would be correct in general, non-academic writing. However, part of the goal of the scientific literature is to specifically identify the intellectual foundation of the work, relevant data, and relevant support for the findings outside of the study in question. One is building a chain of evidence to support the intellectual conclusions and contributions of the work.
