Wednesday, 6 August 2014

Scientific literature


Stop the deluge of science research

The increasing pace of human discovery is a curse – we need to rethink what it means to publish the results of research


Are we publishing too much scientific research? Photograph: Murdo Macleod
The rapid growth of scientific literature is often seen as evidence, if evidence were needed, that the pace of human discovery is accelerating. On the contrary, however, it is becoming a curse – one that requires us to radically rethink what it means to publish the results of research.

Relentlessly – day after day, year after year – scientists are uncovering new facts about the world. If anything, the startling rate at which this happens appears to be increasing, but how would we know if such an impression were true? One way is to look at the rate at which scientific papers are published, and these have indeed been appearing at an ever-increasing rate for decades, even centuries.

As reported in a recent paper, the rate of growth of cited (ie somewhat influential) scientific publications has risen from less than 1% a year before the middle of the 18th century to 2-3% in the first half of the 20th century, and 8-9% today. That last figure is equivalent to scientific output more than doubling every decade. What better testament to human intellectual progress and the power of the scientific method?
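To see why 8-9% annual growth amounts to more than a doubling per decade, here is a quick back-of-the-envelope check (my own illustration in Python; the growth figures are the only inputs taken from the paper above):

    import math

    # Doubling time for compound annual growth: solve (1 + r)^t = 2 for t.
    for annual_growth in (0.08, 0.09):
        doubling_years = math.log(2) / math.log(1 + annual_growth)
        print(f"{annual_growth:.0%} a year -> output doubles in about {doubling_years:.1f} years")

    # Roughly 9.0 years at 8% growth and 8.0 years at 9% -- in both cases
    # faster than once a decade.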

Not so fast. For a start, not every publication is equal. When the author Theodore Sturgeon first invoked his famous law ("90% of everything is crap") he was talking about science fiction, but he could equally well have been talking about science. Estimates vary wildly, but probably between a quarter and a third of all research papers in the natural sciences go uncited. A much larger proportion is cited only by the papers' own authors or by one or two others.

This is not necessarily a sign of inadequate or wasteful research, but it should give us pause. It is at least partly a result of the imperative for researchers to publish in order to continue to secure funding and employment, and the accompanying incentive to salami-slice research results so as to secure the maximum number of publications from any given piece of work. This in turn leads to the bane of every scientist's existence: far too many papers to read in far too little time.

This situation is only going to get worse, even in the absence of any genuine increase in scientific activity. One reason is the rise of author-pays open-access publishing. This has the benefit of allowing anyone to read the research in question without paying the publisher, but also has the disadvantage of giving publishers a strong commercial incentive to issue as much content as possible.

In general I am a supporter of open access, but subscription business models at least help to concentrate the minds of publishers on the poor souls trying to keep up with their journals.

As if this were not enough, proponents of open science (including me) are proposing that researchers should start publishing all of their work – complete with full data sets, comprehensive methods, negative results and "failed" experiments.

In an age of bottomless digital information this is technically achievable and has the potential to greatly increase the transparency, impartiality and reproducibility of research – particularly welcome at a time when science is going through something of a crisis of confidence on these fronts. But it hardly promises a more manageable literature for those who are desperately trying to absorb all this information.

The only practical solution is to take a more differentiated approach to publishing the results of research. On one hand, funders and employers should encourage scientists to issue smaller numbers of more significant research papers. This could be achieved by placing even greater emphasis on the impact of a researcher's very best work and less on their aggregate activity.

On the other, they should require scientists to share all of their results as far as is practically possible. But most of these should not appear in the form of traditional scholarly papers, which are too laborious for both author and reader to fulfil such a role. Rather, less significant work should be issued in a form that is simple, standardised and easy for computers to index, retrieve, merge and analyse. Humans would interact with such records only when looking for aggregated information on very specific topics.
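As a purely hypothetical illustration of what such a simple, standardised record might contain, here is a sketch in Python; every field name is my own assumption rather than any existing standard or platform format:

    # A hypothetical, minimal machine-readable result record -- the fields
    # are illustrative only, echoing the elements mentioned above
    # (methods, data sets, negative results).
    result_record = {
        "title": "Effect of X on Y under condition Z",
        "authors": ["A. Researcher"],
        "date": "2014-08-06",
        "methods": "link-to-full-protocol",
        "dataset": "link-to-deposited-data",
        "outcome": "negative",   # eg positive / negative / inconclusive
        "keywords": ["replication", "null result"],
    }

The point is not these particular fields but that records of this kind could be harvested, merged and queried by machines at scale, leaving humans to consult only the aggregated answers.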

What would such a publication look like? I don't know exactly, but we can see signs in born-digital data publishing platforms such as Figshare (a portfolio company of Digital Science, which I run), Zenodo (an initiative from CERN, the European particle-physics laboratory) and Dryad (a not-for-profit membership organisation that supports the dissemination of research data).

It is easy to forget that 350 years ago the scientific journal was itself an innovation, enabled in large part by the emergence of the printing press some 200 years earlier. Today our challenge is to create a new kind of scientific publication, internet-enabled and fit for a data-rich digital age.

Timo Hannay is managing director at Digital Science – follow them on Twitter @timohannay and @digitalsci
