A "publish or perish" culture in which scientific careers rely on the volume of citations is distorting the results of research, a new study suggests.
An analysis by Daniele Fanelli, of the University of Edinburgh, finds that researchers report more "positive" results for their experiments in US states where academics publish more frequently.
He cites earlier research suggesting that papers are more likely to be accepted by journals if they report positive results that support an experimental hypothesis, producing a bias against "negative" results.
Dr Fanelli, Marie Curie research fellow at Edinburgh's Institute for the Study of Science, Technology and Innovation, sets his study against a context of "growing competition for research funding and academic positions, which, combined with an increasing use of bibliometric parameters to evaluate careers ... pressures scientists into producing 'publishable' results".
Some subject panels in the forthcoming research excellence framework, which will be used to distribute around £1.5 billion in annual quality-related research funding in England, will use citation data in their judgements of academics' outputs.
Dr Fanelli's study, published this week in the open-access journal PLoS ONE, states: "In a random sample of 1,316 papers that declared to have 'tested a hypothesis' in all disciplines, outcomes could be significantly predicted by knowing the addresses of the corresponding authors: those based in US states where researchers publish more papers per capita were significantly more likely to report positive results, independently of their discipline, methodology and research expenditure."
The results "support the hypothesis that competitive academic environments increase not only the productivity of researchers, but also their bias against 'negative' results", it adds.
Dr Fanelli titled the paper, "Do Pressures to Publish Increase Scientists' Bias? An Empirical Support from US States Data".
He told Times Higher Education: "There is quite a longstanding discussion about whether this growing culture of 'publish or perish' in academia is actually distorting the scientific process itself ... My study is the first to try to verify directly this effect in scientific literature across all fields."
Using data from the US National Science Foundation, Dr Fanelli set the proportion of positive results in a random sample of papers published between 2000 and 2007 against a measure of "academic productivity" - the number of articles published per doctorate holder in academia in each US state - while "controlling for the effects of per-capita research expenditure".
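The study's code is not reproduced here, but the analysis it describes - predicting each paper's outcome from its state's publication rate while controlling for research expenditure - is the kind of relationship a logistic regression captures. The sketch below uses simulated data and hypothetical variable names (papers_per_capita, rd_spend_per_capita); it illustrates the technique rather than reproducing Dr Fanelli's actual analysis.

```python
import numpy as np
import statsmodels.api as sm

# Simulated paper-level data, one row per sampled paper. The real study used
# NSF figures; these numbers are purely illustrative.
rng = np.random.default_rng(0)
n = 500
papers_per_capita = rng.uniform(0.5, 4.0, n)     # state "productivity"
rd_spend_per_capita = rng.uniform(200.0, 500.0, n)  # state research expenditure

# Simulate the hypothesised effect: papers from more "productive" states
# are more likely to report a positive result.
p_positive = 1.0 / (1.0 + np.exp(-(-1.0 + 0.8 * papers_per_capita)))
positive = rng.binomial(1, p_positive)           # 1 = paper reports a positive result

# Logistic regression of outcome on productivity, controlling for expenditure.
X = sm.add_constant(np.column_stack([papers_per_capita, rd_spend_per_capita]))
fit = sm.Logit(positive, X).fit(disp=0)

# A significantly positive coefficient on papers_per_capita, with expenditure
# held constant, is the pattern the paper reports.
print(fit.params)  # [const, papers_per_capita, rd_spend_per_capita]
```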
Positive results accounted for less than half of the total in Nevada, North Dakota and Mississippi. In Michigan, Ohio, Nebraska and the District of Columbia, by contrast, between 95 and 100 per cent of papers reported positive results.
The paper argues that negative results most likely "either went completely unpublished or were somehow turned into positive through selective reporting, post-hoc reinterpretation, and alteration of methods, analyses and data".
But does a link between publication rates and positive results necessarily mean that the former caused the latter?
Dr Fanelli said: "This is not conclusive evidence. I would call it the starting point in this direction."