Academics cite work 'they don't know particularly well'

Survey also suggests cited works have little influence on the content of the citing paper, raising questions over citation metrics

November 21, 2018
peacock mask
Source: Alamy

Researchers "tend to cite works they are not influenced by and that they do not know particularly well", according to a new study that raises further questions over whether citations should be used to judge the quality of academic work.

Some universities have turned to citation data – such as a scientist's h-index, a controversial measure that reflects both how many papers a researcher has published and how often each is cited – to make hiring and promotion decisions, according to the research, conducted by academics from Harvard University and the University of Chicago.
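For context, a researcher's h-index is the largest number h such that h of their papers have each been cited at least h times. A minimal sketch in Python, with invented citation counts purely for illustration:

```python
def h_index(citations: list[int]) -> int:
    """Return the largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar at its rank
        else:
            break
    return h

# Example: five papers with hypothetical citation counts.
print(h_index([25, 8, 5, 3, 3]))  # -> 3: the top three papers each have >= 3 citations
```

Because a single blockbuster paper cannot raise the index on its own, the h-index rewards sustained citation across many papers, which is precisely why its critics object when it is used as a proxy for individual quality.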

Using an online survey across six science and humanities fields, the authors set out to discover whether scholars cite work because of its actual influence on their research – or simply because it "support[s] claims they want to make" and is "familiar to the intended audience".

The results revealed that more than 60 per cent of citations were said by respondents to have had merely a "minor" or a "very minor" influence over their article.

It also emerged that around 40 per cent of cited articles were known only "slightly well" or "not well" by the academics who included them in their papers. Academics were particularly likely to say that they were unfamiliar with the contents of highly cited papers.

In another part of the survey, the researchers found that when it was flagged up that a paper had received many citations, academics rated its quality more highly.

They also found that although researchers do refuse to cite papers that fall below a certain level of quality, "above this threshold, frequency of use is unrelated to quality", they write.

The findings "severely undermine" the idea that academics cite high-quality work that influenced them, and should spur a "radical reassessment of the role of citations in evaluative contexts", the authors write.

James Wilsdon, a professor of research policy at the University of Sheffield and an expert on the use of metrics in research evaluation, said that the findings reinforce the "need to handle citation data with care", and to "always place it in context".

But he said that he was not "hugely surprised" by the results. "I think most of us recognise that there's often a performative, strategic or rhetorical dimension to citation – in any given paper, one may cite certain influential or canonical sources, or colleagues, in order to signify a wider appreciation of the field," he said.

Harriet Barnes, head of policy for higher education and skills at the British Academy, said that researchers were "growing ever more conscious about the limits of citation metrics as proxies for evaluation of quality or impact".

The results of the research were presented at a conference called Science, Technology and Innovation Indicators in Transition, held in Leiden in September. They are based on data from a pilot survey, with the full results still under analysis.

david.matthews@timeshighereducation.com

POSTSCRIPT:

Print headline: Study: reassess use of citations

Reader's comments (2)

There are cases that all journal publishers can point to where 'cut and paste', even in pre-electronic days, has led to the same errors from earlier papers (date, volume and issue numbers, page numbers, titling and author name typos etc) being replicated in later citations. There is a culture of 'literature review' and referencing 'the key thinkers' that seems to drive this. Did the survey cover any of this?
The paper is interesting and useful, although at its core it sets up a false dichotomy between "normative" and "social constructivist" models, providing evidence against the former but no positive evidence for the latter. It would, I think, have been better to leave it as a criticism of the normative model, stopping where the evidence ran out. Additionally, influence on "research choices" would not be the sole reason for citing a paper even by a (slightly broader) normative model, so the results here are highly dependent on the authors' chosen definitions - are these arbitrary? Finally, since it's a pilot, the weak p < 0.1 is fine (and surely does tell us something about how scientists evaluate papers) - it will be interesting to see the follow-up.
