
Peer Review and the Scientific Process


IainB:
More reasoning from Judith Curry:
(Copied below sans embedded hyperlinks/images.)
Has science lost its way?
by Judith Curry
Posted on December 1, 2013 | 405 Comments

“The journals want the papers that make the sexiest claims. And scientists believe that the way you succeed is having splashy papers in Science or Nature — it’s not bad for them if a paper turns out to be wrong, if it’s gotten a lot of attention.” – Michael Eisen
...(Read the rest - here.)

--- End quote ---

IainB:
I just checked and realised I had omitted to mention the date of Judith Curry's article quoted above, so I have updated it.
The reason I was interested in the date is that, apparently quite by coincidence, the recent Nobel science prizewinner Randy Schekman said something similar to Curry just a few days later. In fact, he has gone even further and put his money where his mouth is by declaring a boycott of the top-tier journals Nature, Cell and Science, which seem to form an oligopoly over published "peer-reviewed" science research, encouraging and publishing bad or fraudulent science papers and distorting the scientific process in order to boost their circulation figures.

The Guardian article below has it: (my emphasis)
(Copied below, with some embedded hyperlinks, but no images.)
Nobel winner declares boycott of top science journals | Science | The Guardian
Randy Schekman says his lab will no longer send papers to Nature, Cell and Science as they distort scientific process

    Ian Sample, science correspondent
    The Guardian, Monday 9 December 2013 19.42 GMT   

Image: Randy Schekman, centre, at a Nobel prize ceremony in Stockholm.

Leading academic journals are distorting the scientific process and represent a "tyranny" that must be broken, according to a Nobel prize winner who has declared a boycott on the publications.

Randy Schekman, a US biologist who won the Nobel prize in physiology or medicine this year and receives his prize in Stockholm on Tuesday, said his lab would no longer send research papers to the top-tier journals, Nature, Cell and Science.

Schekman said pressure to publish in "luxury" journals encouraged researchers to cut corners and pursue trendy fields of science instead of doing more important work. The problem was exacerbated, he said, by editors who were not active scientists but professionals who favoured studies that were likely to make a splash.

The prestige of appearing in the major journals has led the Chinese Academy of Sciences to pay successful authors the equivalent of $30,000 (£18,000). Some researchers made half of their income through such "bribes", Schekman said in an interview.

Writing in the Guardian, Schekman raises serious concerns over the journals' practices and calls on others in the scientific community to take action.

"I have published in the big brands, including papers that won me a Nobel prize. But no longer," he writes. "Just as Wall Street needs to break the hold of bonus culture, so science must break the tyranny of the luxury journals."

Schekman is the editor of eLife, an online journal set up by the Wellcome Trust. Articles submitted to the journal – a competitor to Nature, Cell and Science – are discussed by reviewers who are working scientists and accepted if all agree. The papers are free for anyone to read.

Schekman criticises Nature, Cell and Science for artificially restricting the number of papers they accept, a policy he says stokes demand "like fashion designers who create limited-edition handbags." He also attacks a widespread metric called an "impact factor", used by many top-tier journals in their marketing.

A journal's impact factor is a measure of how often its papers are cited, and is used as a proxy for quality. But Schekman said it was a "toxic influence" on science that "introduced a distortion". He writes: "A paper can become highly cited because it is good science - or because it is eye-catching, provocative, or wrong."

Daniel Sirkis, a postdoc in Schekman's lab, said many scientists wasted a lot of time trying to get their work into Cell, Science and Nature. "It's true I could have a harder time getting my foot in the door of certain elite institutions without papers in these journals during my postdoc, but I don't think I'd want to do science at a place that had this as one of their most important criteria for hiring anyway," he told the Guardian.

Sebastian Springer, a biochemist at Jacobs University in Bremen, who worked with Schekman at the University of California, Berkeley, said he agreed there were major problems in scientific publishing, but no better model yet existed. "The system is not meritocratic. You don't necessarily see the best papers published in those journals. The editors are not professional scientists, they are journalists, which isn't necessarily the greatest problem, but they emphasise novelty over solid work," he said.

Springer said it was not enough for individual scientists to take a stand. Scientists are hired and awarded grants and fellowships on the basis of which journals they publish in. "The hiring committees all around the world need to acknowledge this issue," he said.

Philip Campbell, editor-in-chief at Nature, said the journal had worked with the scientific community for more than 140 years and the support it had from authors and reviewers was validation that it served their needs.

"We select research for publication in Nature on the basis of scientific significance. That in turn may lead to citation impact and media coverage, but Nature editors aren't driven by those considerations, and couldn't predict them even if they wished to do so," he said.

"The research community tends towards an over-reliance in assessing research by the journal in which it appears, or the impact factor of that journal. In a survey Nature Publishing Group conducted this year of over 20,000 scientists, the three most important factors in choosing a journal to submit to were: the reputation of the journal; the relevance of the journal content to their discipline; and the journal's impact factor. My colleagues and I have expressed concerns about over-reliance on impact factors many times over the years, both in the pages of Nature and elsewhere."

Monica Bradford, executive editor at Science, said: "We have a large circulation and printing additional papers has a real economic cost … Our editorial staff is dedicated to ensuring a thorough and professional peer review upon which they determine which papers to select for inclusion in our journal. There is nothing artificial about the acceptance rate. It reflects the scope and mission of our journal."

Emilie Marcus, editor of Cell, said: "Since its launch nearly 40 years ago, Cell has focused on providing strong editorial vision, best-in-class author service with informed and responsive professional editors, rapid and rigorous peer-review from leading academic researchers, and sophisticated production quality. Cell's raison d'etre is to serve science and scientists and if we fail to offer value for both our authors and readers, the journal will not flourish; for us doing so is a founding principle, not a luxury."

• This article was amended on 10 December 2013 to include a response from Cell editor Emilie Marcus, which arrived after the initial publication deadline.

--- End quote ---
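
An aside on the metric Schekman attacks above: a journal's (two-year) impact factor is simply the number of citations received in a given year by the items the journal published in the previous two years, divided by the number of citable items it published in those two years. A toy calculation follows; the figures are invented purely to show the arithmetic.

--- Code: ---
# Two-year journal impact factor for year Y:
# citations in Y to items from Y-1 and Y-2, divided by
# the number of citable items published in Y-1 and Y-2.
# All figures below are made up for illustration.
citations_2013_to_2012_items = 9000
citations_2013_to_2011_items = 12000
citable_items_2012 = 850
citable_items_2011 = 800

impact_factor_2013 = (
    (citations_2013_to_2012_items + citations_2013_to_2011_items)
    / (citable_items_2012 + citable_items_2011)
)
print(f"2013 impact factor: {impact_factor_2013:.1f}")  # -> 12.7
--- End code ---

Note that nothing in the calculation asks why the citations happened, which is exactly Schekman's point: a paper can be heavily cited because it is eye-catching, provocative, or wrong.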

IainB:
Depressing.
Oh, Never Mind: Top 5 Retracted Science Studies of 2013

Probably speaks volumes about the "reputable" (cough, cough) journals that publish some of this rot as peer-reviewed research.

IainB:
It rather looks like someone at the U. of Bristol (UK) is prepared to address the real issue - the elephant in the room - namely the "increasing concern surrounding the reproducibility of much published research". Some people may find it surprising that this is discussed in the journal Nature. Unfortunately, it is of course behind a paywall.
Bishop Hill mentions it:
_________________________________
The peer review game
Feb 19, 2014 Journals

There is an interesting letter in Nature this week. In-Uck Park of the University of Bristol and his colleagues have adopted something of a game-theoretic approach to try to understand aspects of the peer review process.
The objective of science is to advance knowledge, primarily in two interlinked ways: circulating ideas, and defending or criticizing the ideas of others. Peer review acts as the gatekeeper to these mechanisms. Given the increasing concern surrounding the reproducibility of much published research, it is critical to understand whether peer review is intrinsically susceptible to failure, or whether other extrinsic factors are responsible that distort scientists’ decisions. Here we show that even when scientists are motivated to promote the truth, their behaviour may be influenced, and even dominated, by information gleaned from their peers’ behaviour, rather than by their personal dispositions. This phenomenon, known as herding, subjects the scientific community to an inherent risk of converging on an incorrect answer and raises the possibility that, under certain conditions, science may not be self-correcting. We further demonstrate that exercising some subjectivity in reviewer decisions, which serves to curb the herding process, can be beneficial for the scientific community in processing available information to estimate truth more accurately. By examining the impact of different models of reviewer decisions on the dynamic process of publication, and thereby on eventual aggregation of knowledge, we provide a new perspective on the ongoing discussion of how the peer-review process may be improved.

--- End quote ---

Which is a pretty interesting result, and one which I think will ring true with many readers at BH at least. Here's an excerpt from the conclusions:
Science may ... not be as self-correcting as is commonly assumed, and peer-review models which encourage objectivity over subjectivity may reduce the ability of science to self-correct. Although herding among agents is well understood in cases where the incentives directly reward acting in accord with the crowd (for example, financial markets), it is instructive to see that it can occur when agents (that is, scientists) are motivated by the pursuit of truth, and when gatekeepers (that is, reviewers and editors) exist with the same motivation. In such cases, it is important that individuals put weight on their private signals, in order to be able to escape from herding. Behavioural economic experiments indicate that prediction markets, which aggregate private signals across market participants, might provide information advantages. Knowledge in scientific research is often highly diffuse, across individuals and groups, and publishing and peer-review models should attempt to capture this. We have discussed the importance of allowing reviewers to express subjective opinions in their recommendations, but other approaches, such as the use of post-publication peer review, may achieve the same end.

--- End quote ---
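
The herding mechanism the authors describe is essentially an information cascade, and you can see the effect with a very small simulation. To be clear, this is not the Park et al. model (which is behind the paywall), just a toy sketch: each "reviewer" receives a noisy private signal about whether a claim is true, and then either trusts that signal or defers to the majority verdict of earlier reviewers, with the deference probability standing in for how much the process suppresses private judgement.

--- Code: ---
import random

def simulate(n_reviewers=200, signal_accuracy=0.6,
             crowd_weight=0.9, truth=True, seed=None):
    """Toy information-cascade sketch, not the Park et al. model.

    Each reviewer gets a private signal that matches the truth with
    probability signal_accuracy, then with probability crowd_weight
    simply adopts the majority verdict of earlier reviewers instead.
    Returns the fraction of verdicts that end up correct.
    """
    rng = random.Random(seed)
    verdicts = []
    for _ in range(n_reviewers):
        private = truth if rng.random() < signal_accuracy else not truth
        if verdicts and rng.random() < crowd_weight:
            verdict = sum(verdicts) > len(verdicts) / 2  # follow the crowd
        else:
            verdict = private                            # trust own signal
        verdicts.append(verdict)
    return sum(v == truth for v in verdicts) / n_reviewers

if __name__ == "__main__":
    for w in (0.0, 0.5, 0.9):
        runs = [simulate(crowd_weight=w, seed=s) for s in range(500)]
        wrong = sum(r < 0.5 for r in runs) / len(runs)
        print(f"crowd weight {w}: mean accuracy {sum(runs) / len(runs):.2f}, "
              f"runs that converge on the wrong answer: {wrong:.0%}")
--- End code ---

With no deference to the crowd, every run hovers around the accuracy of the private signals; as the deference weight rises, individual runs snap to lopsided verdicts, and a sizeable fraction of them settle on the wrong answer - the "science may not be self-correcting" risk in miniature.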

IainB:
The fraud was apparently first reported in the journal Nature.
(Copied below sans embedded hyperlinks/images.)
Over 100 published science journal articles just gibberish

By Maxim Lott
Published March 01, 2014
FoxNews.com

Image: fake articles in science journals.

Do scientific papers ever seem like unreadable gibberish to you? Well, sometimes they really are.

Some 120 papers published in established scientific journals over the last few years have been found to be frauds, created by nothing more than an automated word generator that puts random, fancy-sounding words together in plausible sentence structures. As a result they have been pulled from the journals that originally published them.

The fake papers are in the fields of computer science and math and have titles such as “Application and Research of Smalltalk Harnessing Based on Game-Theoretic Symmetries”; “An Evaluation of E-Business with Fin”; and “Simulating Flip-Flop Gates Using Peer-to-Peer Methodologies.” The authors of those papers did not respond to requests for comment from FoxNews.com.

This is not the first time nonsense papers have been published.

In 1996, as a test, a physics professor submitted a fake paper to the philosophy journal Social Text. His paper argued that gravity is “postmodern” because it is “free from any dependence on the concept of objective truth.” Yet it was accepted and published.

But how could gibberish end up in respectable science papers? The man who discovered the recent frauds said it showed slipping standards among scientists.

"High pressure on scientists leads directly to too prolific and less meaningful publications," computer scientist Cyril Labbé of Joseph Fourier University in France, told FoxNews.com.

But he has no explanation as to why the journals published meaningless papers.

"They all should have been evaluated by a peer-review process. I've no explanation for them being here. I guess each of them needs an investigation," he said.

The publishers also could not explain it, admitting that the papers “are all nonsense.”

“We are in the process of investigating… [and] taking the papers down as quickly as possible. A placeholder notice will be put up once the papers have been removed. Since we publish over 2,200 journals and 8,400 books annually, this will take some time,” Eric Merkel-Sobotta, a spokesman for the publisher Springer, which published 16 of the fake papers, told FoxNews.com.

The fraud was first reported in the journal Nature.

Labbé has made it his mission to detect fakes, and ironically has published a paper in a Springer journal about how to automatically detect fake papers. He also built a website that detects whether papers are computer generated.                                                                                     

“Our tools are very efficient to detect SCIgen papers and also to detect duplicates and plagiarisms,” Labbé said. SCIgen is the program that generates random papers.

Some professors said that pay rules that base professor salaries on the number of papers they publish may lead to fakes.

“Most schools have merit raise systems of some kind, and a professor’s merit score is affected by his or her success in publishing scholarly papers,” Robert Archibald, a professor of economics at the College of William and Mary, who studies the economics of higher education, told FoxNews.com.

He noted that because other professors may not read the paper, “publishing a paper that was computer-generated might help with merit pay.”

Labbé also said that overly numerical measures might encourage fraud.

“In aiming at measuring science it is perturbing science,” he said.

The author of this piece, Maxim Lott, can be reached on twitter at @maximlott or at [email protected]
--- End quote ---
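
A footnote on detection: Labbé's own tools are presumably more involved than this, but the core weakness of SCIgen output is easy to exploit. Because the generator assembles papers from a fixed grammar, independently generated papers share long runs of identical wording that genuine prose almost never repeats. A crude sketch of that idea (an illustration, not Labbé's method; the file names in the usage comment are invented):

--- Code: ---
import re

def word_ngrams(text, n=6):
    """Set of lowercased word n-grams in the text."""
    words = re.findall(r"[a-z]+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def generated_text_score(paper_text, known_generated_texts, n=6):
    """Fraction of the paper's word n-grams that also appear in a bank
    of texts known to come from the generator. Template-assembled
    papers re-use whole sentence skeletons, so this overlap is high;
    for ordinary prose it is close to zero."""
    paper = word_ngrams(paper_text, n)
    if not paper:
        return 0.0
    bank = set()
    for text in known_generated_texts:
        bank |= word_ngrams(text, n)
    return len(paper & bank) / len(paper)

# Usage sketch (hypothetical file names):
# suspect = open("suspect_paper.txt").read()
# bank = [open(name).read() for name in ("scigen_sample_1.txt", "scigen_sample_2.txt")]
# print(generated_text_score(suspect, bank))  # near 0.0 suggests ordinary prose
--- End code ---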
