Topic: Peer Review and the Scientific Process (Read 56693 times)

Renegade

  • Charter Member
  • Joined in 2005
  • Posts: 13,220
  • Tell me something you don't know...
Re: Peer Review and the Scientific Process
« Reply #50 on: October 16, 2013, 11:15:43 AM »
Here's a fun site:

http://lesswrong.com/

Quote
About Less Wrong

Interested in improving your reasoning and decision-making skills? Then you've come to the right place.

Less Wrong is a large, active website for people who try to think rationally. To get a quick idea of why rationality is important and how to develop it, try reading Your Intuitions Are Not Magic, The Cognitive Science of Rationality, or What I've Learned From Less Wrong.

It's kind of off-topic, but still relevant to the general topic of reasoning and logic.
Slow Down Music - Where I commit thought crimes...

Freedom is the right to be wrong, not the right to do wrong. - John Diefenbaker

IainB

  • Supporting Member
  • Joined in 2008
  • Posts: 6,136
  • Slartibartfarst
Re: Peer Review and the Scientific Process
« Reply #51 on: October 16, 2013, 09:13:22 PM »
Here's a fun site:
http://lesswrong.com/
Quote
About Less Wrong
Interested in improving your reasoning and decision-making skills? Then you've come to the right place.
Less Wrong is a large, active website for people who try to think rationally. To get a quick idea of why rationality is important and how to develop it, try reading Your Intuitions Are Not Magic, The Cognitive Science of Rationality, or What I've Learned From Less Wrong.
___________________
It's kind of off-topic, but still relevant to the general topic of reasoning and logic.
I would suggest, given our natural human irrationality, that it is not off-topic at all.
We have to learn to use rational-critical thinking. It's not something we are born with, but a skill that we have to learn - like riding a bike. That's why, some years back, it was introduced as an "O" (Ordinary) Level syllabus in UK secondary schools (better late than never). They found - perhaps unsurprisingly - that it was definitely a transferable skill: it helped students not only to improve their grades in other "O" Level subjects, but also to cope better with introductory university material.

I gathered from my reading that the idea of introducing it at secondary level arose partly because tests on student intake to universities showed that entrants lacked (amongst other things) rational-critical thinking skills. Universities first worked around the problem by teaching it in entrance foundation courses, and later addressed it directly by shifting the training down to secondary schools. That way all children could benefit, whether or not they went on to university, and, as I noted above, the skill transferred: it improved secondary students' grades in their other "O" Level subjects.

You arguably could not have a rational discussion about Peer Review and the Scientific Process if you were not employing critical thinking - i.e., reasoning and logic.
I have used that site you point to (http://lesswrong.com/) quite a bit, to check/help improve my own reasoning skills, and have pointed other people (including my then 11 y/o daughter) to it as well. It's rather useful.

To understand a deeper potential significance of this, consider The Parable of the Talents (Matthew 25:14-30; Luke 19:12-28). In this case, the Intellect is one of our servants.
Why would we deliberately continue to squander, cripple or imprison our intellects in ignorance, shuttering it up, uneducated, in a dark box, throughout the duration of our lives, when once we can understand this simple truth: that everything of ourselves has been given to us - a gift of Life - and that it is up to us to make the fullest use of our gifts, and that it is never too late to start?

As I wrote above:
Quote
"However, the depressing reality seems too often to be that many people are so unable to think rationally for themselves that they seem gullible to this kind of barrage of logical fallacy. One's head would be full of a confusing and probably conflicting mass of invalid premises, with ergo no real knowledge or understanding of truth."

This is a very old idea and the stuff of wisdom. Fiat lux - literally, "Let there be light".
Quote
From the third verse of the Book of Genesis. In the King James Bible, it reads, in context:
  • 1:1 - In the beginning God created the heaven and the earth.
  • 1:2 - And the earth was without form, and void; and darkness was upon the face of the deep. And the Spirit of God moved upon the face of the waters.
  • 1:3 - And God said, Let there be light: and there was light.
  • 1:4 - And God saw the light, and it was good; and God divided the light from the darkness.

I would prefer to exist in the light, and am still working on it.

Renegade

  • Charter Member
  • Joined in 2005
  • Posts: 13,220
  • Tell me something you don't know...
Re: Peer Review and the Scientific Process
« Reply #52 on: October 16, 2013, 09:27:24 PM »
We have to learn to use rational-critical thinking. It's not something we are born with, but a skill that we have to learn - like riding a bike.

There are a few skills that are innate (a priori).

A        (given)
A -> B   (given)
--------
B        (follows - modus ponens)

A & ~A   (false in every case - non-contradiction)

etc.

But beyond a few simple things like that, well, nope. They need to be learned.
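The two inference patterns sketched above can be checked mechanically. Here is a minimal brute-force truth-table sweep (my own illustrative sketch, not from the original post) confirming that modus ponens is valid and that A & ~A is unsatisfiable:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

# Modus ponens is valid: in every valuation where both premises
# (A, and A -> B) hold, the conclusion B also holds.
modus_ponens_valid = all(
    b
    for a, b in product([False, True], repeat=2)
    if a and implies(a, b)
)

# Non-contradiction: A & ~A is false under every valuation.
contradiction_unsatisfiable = all(not (a and not a) for a in [False, True])

print(modus_ponens_valid, contradiction_unsatisfiable)  # True True
```

The same exhaustive-valuation approach scales (exponentially, but workably) to checking any small propositional argument.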

To understand a deeper potential significance of this, consider The Parable of the Talents (Matthew 25:14-30; Luke 19:12-28). In this case, the Intellect is one of our servants.
Why would we deliberately continue to squander, cripple or imprison our intellects in ignorance, shuttering it up, uneducated, in a dark box, throughout the duration of our lives, when once we can understand this simple truth: that everything of ourselves has been given to us - a gift of Life - and that it is up to us to make the fullest use of our gifts, and that it is never too late to start?

+1

Though it is very hard for a lot of people to go back to learning.

And logic skills do get rusty. It's always useful to brush up. I know I need it in some areas.


I would prefer to exist in the light, and am still working on it.

+1

Harder than it sounds. :( Much harder. Fortunately a lot of things get easier once you set yourself on the right path.

IainB

  • Supporting Member
  • Joined in 2008
  • Posts: 6,136
  • Slartibartfarst
Re: Peer Review and the Scientific Process
« Reply #53 on: October 17, 2013, 09:28:00 PM »
...Though it is very hard for a lot of people to go back to learning. ...
If one operated on the basis that one did all one's schoolin' an' learnin' when one was young, and that's over now, then one would implicitly be assuming that one will not learn anything new from that point onwards.
That looks like a false premise to me. The human mind is an adaptable learning machine. Sure, if one unconsciously "turned it off" at (say) age 25 or so, then it might feel a bit rusty to make the effort now, but it doesn't necessarily preclude one's learning something new. I reckon that intellectual laziness probably enters into it as well.
I've always been an information junkie and what shocks me is how ignorant I still am and how much more there is to learn/understand/experience. A single lifetime won't have been long enough.

IainB

  • Supporting Member
  • Joined in 2008
  • Posts: 6,136
  • Slartibartfarst
Re: Peer Review and the Scientific Process
« Reply #54 on: October 18, 2013, 02:58:10 AM »
I had thought that zoology (the scientific study of the behaviour, structure, physiology, classification, and distribution of animals of a particular region or geological period) used well-established, tried-and-tested scientific processes to arrive at its conclusions. Well, that may be so, but it apparently doesn't stop the BBC broadcasting staggeringly misleading content in a "documentary" on the subject: Mythical Attenborough Fail

I always reckoned that David Attenborough's work was the absolute last word in factual natural history documentaries, but this is the second instance I have read about/seen where one of his proggies was seriously "off" in the science department.
Look at the complaint:
Quote
Complaint Summary: ‘Tree of Life’ false, misleading and non-scientific
Full Complaint: The programme makes extensive use of a ‘Tree of Life’ pictorial device depicting species as branches on a tree, with the vertical dimension showing time. All thousands of branches are continuous and ultimately end up together in the present time. This is false, because we all know most species died out long ago (so the vast majority of branches shouldn’t reach the present). It is also misleading, as the viewer will think the present time is much richer in species than the past. It is finally non-scientific, using an antiquated metaphor long ago disproven by the likes of Stephen Jay Gould. Please insert a correction/disclaimer at the beginning of future broadcasts and for the rest of the first showing of the series.
__________________________
Maybe the BBC is in the vanguard of the Post-Modern Science movement (aka "made up Science") that I posted about here: Re: Peer Review and the Scientific Process (in Post-Modern Science ).

Renegade

  • Charter Member
  • Joined in 2005
  • Posts: 13,220
  • Tell me something you don't know...
Re: Peer Review and the Scientific Process
« Reply #55 on: October 20, 2013, 10:16:03 AM »
Here's an interesting bit on bias:

http://grist.org/pol...-ability-to-do-math/

Quote
Everybody knows that our political views can sometimes get in the way of thinking clearly. But perhaps we don’t realize how bad the problem actually is. According to a new psychology paper, our political passions can even undermine our very basic reasoning skills. More specifically, the study finds that people who are otherwise very good at math may totally flunk a problem that they would otherwise probably be able to solve, simply because giving the right answer goes against their political beliefs.

The study, by Yale law professor Dan Kahan and his colleagues, has an ingenious design. At the outset, 1,111 study participants were asked about their political views and also asked a series of questions designed to gauge their “numeracy,” that is, their mathematical reasoning ability. Participants were then asked to solve a fairly difficult problem that involved interpreting the results of a (fake) scientific study. But here was the trick: While the fake study data that they were supposed to assess remained the same, sometimes the study was described as measuring the effectiveness of a “new cream for treating skin rashes.” But in other cases, the study was described as involving the effectiveness of “a law banning private citizens from carrying concealed handguns in public.”

More at the link.
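The task in the Kahan study hinges on a 2x2 contingency table where the intuitive shortcut (comparing raw counts) and the correct answer (comparing rates within each group) point in opposite directions. A sketch of that trap, with illustrative numbers of the kind the study used (see the paper for the actual table):

```python
# Outcomes in a hypothetical skin-cream trial (counts are illustrative):
#                 improved   got worse
# used cream        223         75
# no cream          107         21
improved_cream, worse_cream = 223, 75
improved_none, worse_none = 107, 21

# The intuitive shortcut compares raw counts (223 > 107, "the cream works").
# The correct comparison is of improvement *rates* within each group:
rate_cream = improved_cream / (improved_cream + worse_cream)
rate_none = improved_none / (improved_none + worse_none)

print(f"cream: {rate_cream:.1%}, no cream: {rate_none:.1%}")
# cream: 74.8%, no cream: 83.6% -- the untreated group improved more
# often, so these numbers actually count *against* the cream.
```

The study's finding was that numerate partisans got this right for the skin-cream framing but reverted to the shortcut when the same numbers were relabelled as gun-control data that cut against their politics.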

IainB

  • Supporting Member
  • Joined in 2008
  • Posts: 6,136
  • Slartibartfarst
Re: Peer Review and the Scientific Process
« Reply #56 on: October 20, 2013, 05:05:01 PM »
^^ Nice find, but watch that space. In the US it will almost inevitably be used/abused to attempt to irrationally load one's argument in favour of the proponents of this or that religio-political ideology, and ad hom the proponents of contradictory religio-political ideologies.

In any event, there's little new about it. Edward De Bono had already identified the problem years ago in his book "Teaching Thinking" - he called it "intellectual deadlock", and pointed out how it effectively disables one from developing/using thinking skills.

He also pointed out in that and/or a later book that the practice of "taking positions" in debate was likely to be one of the single greatest inhibiting factors to our development, leading to wars and holding back man's evolution. You can substitute "adopting a religio-political ideology" for "taking positions". It sets one's paradigms rock solid so that - regardless of verifiable observational evidence - you can't see or think with any other so-called "truth" (belief/dogma) except that which your paradigm allows you to see.

Now try and prove that peer review can actually add any objective truth to or objectively validate any part of the scientific process.
I predict it is likely to be impossible.
If we are interested in Truth, and if we wish to be something more than unthinking parrots reciting some moronic dogma of a religio-political ideology (system of belief) for most of our lives (which I would argue is realising at best only a sub-human potential), then it seems that one has to fall back on "Nullius in verba/verbo." Motto of the Royal Society, London. Literally, "Take nobody's word for it; see for yourself".
If you need an example of what I mean, watch this depressing piece of video footage of an interview, and weep: (I only came across this by accident yesterday)



Though it is from 2009, it is just as significant today as it was then.
The interviewer, who is evidently well-informed, just stands there politely asking simple, very factual questions based on independently verifiable data (no trick questions), and the interviewee - who should be well-informed - answers them to the best of her ability.
I felt acutely embarrassed for her.

Renegade

  • Charter Member
  • Joined in 2005
  • Posts: 13,220
  • Tell me something you don't know...
Re: Peer Review and the Scientific Process
« Reply #57 on: October 20, 2013, 06:54:40 PM »
^^ Nice find, but watch that space. In the US it will almost inevitably be used/abused to attempt to irrationally load one's argument in favour of the proponents of this or that religio-political ideology, and ad hom the proponents of contradictory religio-political ideologies.

Yes. Someone will come up with some skewed "study", then use that to badger people into accepting their skewed conclusion.

Dressing pigs (fraud) up in pink dresses (as science) seems to be far too common.

In a few cases, you can trace problems back to deeper problems in science where accepted science simply doesn't pan out the way it is purported to.

He also pointed out in that and/or a later book that the practice of "taking positions" in debate was likely to be one of the single greatest inhibiting factors to our development, leading to wars and holding back man's evolution. You can substitute "adopting a religio-political ideology" for "taking positions". It sets one's paradigms rock solid so that - regardless of verifiable observational evidence - you can't see or think with any other so-called "truth" (belief/dogma) except that which your paradigm allows you to see.

One of the problems with "debate" is that it is set up as FOR/AGAINST, which entirely misses the point of finding some sort of truth for most topics. Unless the topic naturally fits that framing, you're simply left with a false dilemma - which, again, is perfect for the religio-political zealots.


If we are interested in Truth, and if we wish to be something more than unthinking parrots reciting some moronic dogma of a religio-political ideology (system of belief) for most of our lives (which I would argue is realising at best only a sub-human potential), then it seems that one has to fall back on "Nullius in verba/verbo." Motto of the Royal Society, London. Literally, "Take nobody's word for it; see for yourself".

This is far more difficult than it sounds at first glance.

At some point we need to trust someone else to have studied a topic properly, and to have presented the evidence objectively and fairly.

Like you mentioned elsewhere - one lifetime is not nearly enough.


I felt acutely embarrassed for her.

Oh good grief...

"We're on different planets."

Yes. You are on a different planet.

"I have a job and I don't have time to check..."

Well, there's a large part of the problem. In a world where 2 incomes are often needed for a family to survive, it isn't really justifiable to blame people for prioritizing putting food on the table and trusting so-called "experts" vs. verifying data themselves.

Then again, if you don't have the time to check at least at a basic level, you might want to consider just keeping your trap shut.

barney

  • Charter Member
  • Joined in 2006
  • Posts: 1,282
Re: Peer Review and the Scientific Process
« Reply #58 on: October 21, 2013, 04:40:23 AM »
Well, there's a large part of the problem. In a world where 2 incomes are often needed for a family to survive, it isn't really justifiable to blame people for prioritizing putting food on the table and trusting so-called "experts" vs. verifying data themselves.

Then again, if you don't have the time to check at least at a basic level, you might want to consider just keeping your trap shut.

Been following this thread for a while, but I must admit to a[n apparent] total lack of understanding.  Since any review is a peer review (some peers being greater (or lesser) than others, of course, and some being councils rather than individuals), we must accept such reviews at face value or perform the reviewed activity/action/report ourselves.  Shamefully, I'm not qualified to review certain nuclear/bio-molecular/physical reports myself (missed those particular days in class, donchano), but, being of [somewhat] rational mind, I do have the capacity of judgement according to mine own particular/weird assumptions and assessments.

That condition, presumably, exists in all reasoning beings - at least to some extent.

Hence, what is the problem with peer review, per se?  Bogus judgements?  See 'em all the time.  This seems nothing more than a complaint against human nature.

xtabber

  • Supporting Member
  • Joined in 2007
  • Posts: 572
Re: Peer Review and the Scientific Process
« Reply #59 on: October 22, 2013, 12:58:27 PM »
Peer Review has a very specific meaning in academia.

Academic journals use panels of reviewers with experience in the fields they cover to screen manuscripts submitted to them and determine whether they are worthy of being published by the journal.  These reviewers are the "peers" of the authors and papers they accept for publication are said to have been "peer reviewed."

Note that acceptance for publication does not necessarily mean that the reviewers agree with a paper, just that they find it good enough to be published in that journal. Thus "peer review" is (or at least should be - there are many abuses in practice) more of a triage process than an endorsement. The main purpose is to weed out the cranks and charlatans who would otherwise flood the journal, just as they do the comments sections of many blogs.

barney

  • Charter Member
  • Joined in 2006
  • Posts: 1,282
Re: Peer Review and the Scientific Process
« Reply #60 on: October 22, 2013, 03:25:50 PM »
there are many abuses in practice

And that was my attempted point.  While I'm aware of the [stricter] meaning of the term, I seldom see it in action in what actually gets published.  The petty jealousies of academia far surpass anything to be found in the commercial [real?] world, and ofttimes colour committee judgements.  At the end of the day, that formal process devolves to the personal judgements of the adjudicators, not all of whom are impartial.  The same thing applies to the corporate world, although the motivations there [usually] tend to be a bit clearer.  I've been, unfortunately, involved in both worlds.  When I discovered that my compatriots did not try as hard [as me] to clear their thoughts of possible prejudices and biases - and since I had neither the resources nor the power to enforce such - I relegated the process to my mental scrapheap, where it has remained for the last decade or so.  Still read 'em, when available, but no longer place any credence in 'em.

Renegade

  • Charter Member
  • Joined in 2005
  • Posts: 13,220
  • Tell me something you don't know...
Re: Peer Review and the Scientific Process
« Reply #61 on: October 29, 2013, 04:49:06 AM »
Most scientific research is bunk? Apparently so...

http://reason.com/bl...entific-results-bunk

http://www.latimes.c...027,0,1228881.column

http://www.plosmedic...journal.pmed.0020124

Ahem... People were extolling the virtues of "peer review"? Hahahahaaha~! ;D 8)

IainB

  • Supporting Member
  • Joined in 2008
  • Posts: 6,136
  • Slartibartfarst
Re: Peer Review and the Scientific Process
« Reply #62 on: November 03, 2013, 08:09:58 PM »
Interesting perspective of peer-reviewed literature from a science journalist - John Horgan - who kicks himself for a lack of investigation when writing an article in 1983 about Jerrold S. Petrofsky, a biomedical engineer at Wright State University who had been trying to help paralyzed patients walk by electrically stimulating their muscles with a computer-controlled device.
In 1985 he finally completed a proper investigation and wrote a report correcting the record - which nearly got him into trouble, until the report was subsequently vindicated.
(see the link to the PLOS Medicine paper.)
(Otherwise copied below sans embedded hyperlinks/images.)
Quote
Cross-Check: Critical views of science in the news
By John Horgan | November 2, 2013

I’m moving soon, and so I’m riffling through the files I’ve accumulated in my decades as a science writer and chucking those I’ll never (I hope) need. Carrying out this archaeological dig into the strata of my career, I’m struck once again by all the “breakthroughs” and “revolutions” that have failed to live up to their hype: string theory and other supposed “theories of everything,” self-organized criticality and other theories of complexity, anti-angiogenesis drugs and other potential “cures” for cancer, drugs that can make depressed patients “better than well,” “genes for” alcoholism, homosexuality, high IQ and schizophrenia.

Nanette Davis stands at the podium during her graduation from Wright State University in 1983, an event that raised hopes that electrical stimulation of muscles would soon help paralyzed people regain control of their limbs. Davis was helped to the podium by engineer Jerrold Petrofsky (right), the designer of her muscle-stimulation system.

I graduated from journalism school in 1983 hoping to celebrate scientific advances, but from the start reality thwarted my intentions. I got a job as a staff writer for the Institute of Electrical and Electronics Engineers, a trade association. One of my first assignments was profiling Jerrold S. Petrofsky, a biomedical engineer at Wright State University trying to help paralyzed patients walk by electrically stimulating their muscles with a computer-controlled device.

Petrofsky was a lavishly honored star of the IEEE, whose research had reportedly enabled Nanette Davis, a paralyzed student at Wright State, to walk on stage and receive her diploma during her June, 1983, graduation ceremony. His work was lauded by major media, including the BBC, TIME, Newsweek, Nova and 60 Minutes. In 1985 CBS produced a television movie, First Steps, starring Judd Hirsch as Petrofsky.

I wrote a puff piece about Petrofsky–based primarily on interviews with him and materials supplied by him and Wright State–published in the November 1983 issue of The Institute, the monthly newspaper of the IEEE. It never occurred to me to question Petrofsky’s claims. Who was I, a mere rookie, to second-guess him, Wright State and media like 60 Minutes?

Then other biomedical engineers wrote letters to me complaining that coverage of Petrofsky’s work was raising false hopes among paralyzed patients. At first, I thought these critics were just envious of Petrofsky’s fame, but when I investigated their complaints, they seemed to have substance.

I ended up writing an article, published in The Institute in May 1985, presenting evidence that Petrofsky’s methods for helping paralyzed subjects were less effective than he claimed. My original November 1983 article, which Petrofsky had approved before publication, stated that Davis, while accompanied by Petrofsky during her graduation ceremony, controlled the stimulation of her own muscles and did not need his assistance.

Actually, Petrofsky held the device that stimulated Davis’s muscles, and he and another professor had to prop Davis up during the ceremony because the device malfunctioned. Davis also told me that before she met Petrofsky, she had trained herself to stand in leg braces for hours. In other words, her graduation feat was less impressive than it appeared. The muscle-stimulation method was also not risk free; Davis broke an ankle during a training session in 1984.

In my 1985 article, I argued that Petrofsky’s work raised questions that went beyond his case: “Has Petrofsky gone too far in seeking publicity for his work, as some of his peers suggest? Or should he be praised for being an effective communicator? In addressing these questions—which are echoed in other fields of research as well—perhaps some answers may be provided to a broader and more important question: What can engineers and scientists do to inform the public about their work, while ensuring that it is not misrepresented?”

This episode also taught me some lessons about science journalism that my subsequent experiences reinforced. First, researchers, when accused of hype, love to blame it on the media. But media hype can usually be traced back to the researchers themselves.

I also learned that critical journalism is much harder, more time-consuming and riskier than celebratory journalism. My 1985 investigation of Petrofsky, which I toiled over for months, made my editor so nervous that he wanted to bury it in the back pages of The Institute; I had to go over his head to persuade the publisher that my article deserved front-page treatment. After the article came out, the IEEE formed a panel to investigate not Petrofsky but me. The panel confirmed the accuracy of my reporting.

Since then, I keep struggling to find the right balance between celebrating and challenging alleged advances in science. After all, I became a science writer because I love science, and so I have tried not to become too cynical and suspicious of researchers. I worry sometimes that I’m becoming a knee-jerk critic. But the lesson I keep learning over and over again is that I am, if anything, not critical enough.

Arguably the biggest meta-story in science over the last few years—and one that caught me by surprise–is that much of the peer-reviewed scientific literature is rotten. A pioneer in exposing this vast problem is the Stanford statistician John Ioannidis, whose blockbuster 2005 paper in PLOS Medicine presented evidence that “most current published research findings are false.”

Discussing his findings in Scientific American two years ago, Ioannidis writes: “False positives and exaggerated results in peer-reviewed scientific studies have reached epidemic proportions in recent years. The problem is rampant in economics, the social sciences and even the natural sciences, but it is particularly egregious in biomedicine.”

In his recent defense of scientism (which I criticized on this blog), Steven Pinker lauds science’s capacity for overcoming bias and other human failings and correcting mistakes. But the work of Ioannidis and others shows that this capacity is greatly overrated.

“Academic scientists readily acknowledge that they often get things wrong,” The Economist states in its recent cover story “How Science Goes Wrong.” “But they also hold fast to the idea that these errors get corrected over time as other scientists try to take the work further. Evidence that many more dodgy results are published than are subsequently corrected or withdrawn calls that much-vaunted capacity for self-correction into question. There are errors in a lot more of the scientific papers being published, written about and acted on than anyone would normally suppose, or like to think.”

So whatever happened to Petrofsky? He reportedly left Wright State in 1987 and ended up at Loma Linda University in California. The only article I could find online that mentions criticism of his work at Wright State is a 1985 New York Times report on the angry reaction of biomedical researchers to the film “First Steps.” As for Nanette Davis, after her famous 1983 graduation “walk” she “returned to her wheelchair,” according to a 2010 report in the Dayton Daily News. She is now a mother and teacher.

Photo credit: National Center for Rehabilitation Engineering, Wright State University, http://www.wright.ed...a.ash/publicity.html.
« Last Edit: November 03, 2013, 08:34:25 PM by IainB, Reason: Minor corrections. »
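Ioannidis's headline claim quoted above is arithmetic as much as it is empirical: the positive predictive value (PPV) of a claimed finding falls below 50% whenever the pre-study odds of a true relationship are low relative to statistical power and the significance threshold. A sketch following the structure of the 2005 PLOS Medicine formula (the parameter values below are illustrative, not the paper's):

```python
def ppv(prior_odds: float, power: float = 0.8, alpha: float = 0.05,
        bias: float = 0.0) -> float:
    """Probability that a claimed (statistically significant) finding is
    true, following the structure of Ioannidis (2005).

    prior_odds -- R, ratio of true to null relationships tested in a field
    bias       -- u, fraction of would-be negative results reported positive
    """
    true_positives = power * prior_odds + bias * (1 - power) * prior_odds
    false_positives = alpha + bias * (1 - alpha)
    return true_positives / (true_positives + false_positives)

# Exploratory field: 1 true relationship per 20 tested, no bias.
print(round(ppv(prior_odds=0.05), 3))            # 0.444 -- most findings false
# The same field with modest reporting bias fares worse.
print(round(ppv(prior_odds=0.05, bias=0.2), 3))  # 0.149
```

Even with respectable power (80%) and the conventional alpha of 0.05, low pre-study odds alone are enough to push most "positive" findings into falsehood; bias only compounds it.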

IainB

  • Supporting Member
  • Joined in 2008
  • Posts: 6,136
  • Slartibartfarst
Re: Peer Review and the Scientific Process
« Reply #63 on: November 27, 2013, 05:03:40 PM »
Most scientific research is bunk? Apparently so...
Ahem... People were extolling the virtues of "peer review"? Hahahahaaha~! ;D 8)
______________________
Was just reviewing the posts in this thread, and re-read the links you gave there. They are not opinion pieces, seem to be quite factual, and one is an abstract from a research paper.

Some useful conclusions:

IainB

  • Supporting Member
  • Joined in 2008
  • Posts: 6,136
  • Slartibartfarst
Re: Peer Review and the Scientific Process
« Reply #64 on: December 03, 2013, 05:02:24 AM »
Interesting post by John Timmer in Ars Technica - it doesn't seem to carry his usual editorial bias, so it could be worth a read in the context of "peer review":
(Copied below sans embedded hyperlinks/images.)
Quote
Anti-GMO crop paper to be forcibly retracted
Journal editor recognizes extensive flaws, says the paper shouldn't have run.
by John Timmer - Dec 1, 2013 6:00 pm UTC

Last year, a French researcher made waves by announcing a study that suggested genetically modified corn could lead to an increased incidence of tumors in lab animals. But the way the finding was announced seemed designed to generate publicity while avoiding any scientific evaluation of the results. Since then, the scientific criticisms have rolled in, and they have been scathing. Now, the editor of the journal that published it has decided to pull the paper despite the objections of its primary author.

The initial publication focused on corn that had been genetically engineered to carry a gene that allowed it to break down a herbicide. French researchers, led by Gilles-Eric Séralini, fed the corn, with and without herbicide, to rats. Control populations were given the herbicide alone or unmodified corn. The authors concluded that the genetically modified corn led to an elevated incidence of tumors and early death.

But even a cursory glance at the results suggested there were some severe problems with this conclusion. To begin with, there were similar effects caused by both the genetically engineered crop and by the herbicide it was designed to degrade. None of the treatments showed a dose effect; in some cases, the lowest doses had the most dramatic effect. And, if the treatment populations were combined, in some cases they were healthier than the controls. Tests of whether the results were statistically significant were completely lacking.

And, since then, the scientific response has been withering. The German and EU food regulators looked the results over, but found them inadequate. The paper itself has accumulated a host of Letters to the Editor attached to it. And a different journal published an entire paper devoted to outlining its deficiencies.

All of these criticisms of the study could have been incorporated into the original press coverage, except for the fact that the people behind the study manipulated journalists to ensure that they were unable to obtain any outside evaluations of the paper. Nevertheless, even as the criticisms rolled in, the researchers remained defiant, and anti-GMO activists continued to promote the paper as evidence of the dangers posed by genetically modified crops.

Now, the editor of Food and Chemical Toxicology, the journal in which this study was published, has decided its flaws are so severe that including multiple Letters to the Editor with the study just won't cut it. In response to the initial complaints, he had set up a panel that looked over the paper and the additional data provided by Séralini. According to one letter from the editor, obtained by an anti-GMO activist group, "The panel had many concerns about the quality of the data and ultimately recommended that the article should be withdrawn." The editor has agreed with this recommendation and has already written a statement that will replace it.

Séralini has been asked to get in touch to discuss the details of the paper's withdrawal, but he has announced that he stands by his conclusions. This will ultimately force the editor to withdraw it over Séralini's objections.

This sort of retraction is a bit unusual. In one heavily publicized past example, a research group described bacteria that could supposedly replace phosphate with arsenate. Despite a large number of problems with that paper (including a failure to reproduce the original results), Science still hasn't pulled it. In contrast, a paper linking Chronic Fatigue Syndrome to a virus has been pulled, perhaps because there were more serious questions about the scientific procedures used to generate its results.

For the GMO paper, the situation is complex. According to the publisher, Elsevier, there is no evidence of any fraud or data manipulation. However, the number of animals used was insufficient to support any conclusions, and the paper certainly drew some. This goes against the journal's guidelines, and thus they seem to be admitting the paper should never have been published in the first place. That would seem to be a failure of editorial process and peer review, yet Elsevier states, "The peer review process is not perfect, but it does work."

(An additional problem that could justify retraction, noted by one of the papers linked above, is that animal welfare rules call for animals that develop tumors to be euthanized, while Séralini let the tumors grow to horrific sizes.)

If the precise grounds for retraction aren't entirely clear, the response of the groups campaigning against genetically modified foods is. A statement released by the group GMWatch basically says that the paper was fine, the editor is being unethical, and Monsanto might be behind it all. So, it seems that Séralini's paper will continue to be brought up long after it's removed from the formal scientific literature.
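One of the criticisms above - that the treatments showed no dose effect and that significance tests were "completely lacking" - is easy to make concrete. Below is a minimal sketch (stdlib Python only; the dose levels and tumour rates are invented for illustration and are not Séralini's data) of a simple permutation test for a dose trend:

```python
import random

def trend_statistic(doses, rates):
    """Simple trend statistic: covariance between dose level and tumour rate.
    Large positive values mean rates rise with dose."""
    n = len(doses)
    mean_d, mean_r = sum(doses) / n, sum(rates) / n
    return sum((d - mean_d) * (r - mean_r) for d, r in zip(doses, rates))

def permutation_p_value(doses, rates, n_perm=10_000, seed=0):
    """One-sided permutation test: how often does shuffling the observed
    group rates across dose levels produce a trend at least as strong as
    the one observed? A large p-value means no evidence of a dose effect."""
    rng = random.Random(seed)
    observed = trend_statistic(doses, rates)
    shuffled = list(rates)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if trend_statistic(doses, shuffled) >= observed:
            hits += 1
    return hits / n_perm

# Invented numbers echoing the criticism: tumour rates that do NOT rise
# with dose (the low dose here shows the largest effect).
doses = [0, 1, 2, 3]
rates = [0.30, 0.55, 0.40, 0.30]
```

Run on those invented numbers, the p-value comes out far above any conventional significance threshold - i.e., random shuffling of the groups produces an apparent "trend" at least this strong most of the time. That is exactly the kind of check the critics said the paper never reported.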

IainB

  • Supporting Member
  • Joined in 2008
  • **
  • Posts: 6,136
  • Slartibartfarst
    • View Profile
    • Donate to Member
Re: Peer Review and the Scientific Process
« Reply #65 on: December 03, 2013, 05:37:27 AM »
More reasoning from Judith Curry:
(Copied below sans embedded hyperlinks/images.)
Quote
Has science lost its way?
by Judith Curry
Posted on December 1, 2013 | 405 Comments

“The journals want the papers that make the sexiest claims. And scientists believe that the way you succeed is having splashy papers in Science or Nature — it’s not bad for them if a paper turns out to be wrong, if it’s gotten a lot of attention.” – Michael Eisen
...(Read the rest - here.)
« Last Edit: December 11, 2013, 12:13:05 PM by IainB, Reason: Added posting date/details of quoted article. »

IainB

  • Supporting Member
  • Joined in 2008
  • **
  • Posts: 6,136
  • Slartibartfarst
    • View Profile
    • Donate to Member
Re: Peer Review and the Scientific Process
« Reply #66 on: December 11, 2013, 12:48:10 PM »
I just checked and realised I had omitted to mention the date of the quote re Judith Curry's article, above, so have updated it.
The reason I was interested in its date is that, apparently quite by coincidence, the recent Nobel science prizewinner Randy Schekman said something very similar to Curry just a few days later. In fact, he went further and put his money where his mouth is by declaring a boycott of three publishers (Nature, Cell, and Science) that seem to form an oligopoly in published "peer-reviewed" science research - deliberately encouraging/publishing many bad/fraudulent papers and distorting the scientific process in order to boost their circulation figures.

The Guardian article below has it: (my emphasis)
(Copied below, with some embedded hyperlinks, but no images.)
Quote
Nobel winner declares boycott of top science journals | Science | The Guardian
Randy Schekman says his lab will no longer send papers to Nature, Cell and Science as they distort scientific process

    Ian Sample, science correspondent
    The Guardian, Monday 9 December 2013 19.42 GMT   

Image: Randy Schekman, centre, at a Nobel prize ceremony in Stockholm.

Leading academic journals are distorting the scientific process and represent a "tyranny" that must be broken, according to a Nobel prize winner who has declared a boycott on the publications.

Randy Schekman, a US biologist who won the Nobel prize in physiology or medicine this year and receives his prize in Stockholm on Tuesday, said his lab would no longer send research papers to the top-tier journals, Nature, Cell and Science.

Schekman said pressure to publish in "luxury" journals encouraged researchers to cut corners and pursue trendy fields of science instead of doing more important work. The problem was exacerbated, he said, by editors who were not active scientists but professionals who favoured studies that were likely to make a splash.

The prestige of appearing in the major journals has led the Chinese Academy of Sciences to pay successful authors the equivalent of $30,000 (£18,000). Some researchers made half of their income through such "bribes", Schekman said in an interview.

Writing in the Guardian, Schekman raises serious concerns over the journals' practices and calls on others in the scientific community to take action.

"I have published in the big brands, including papers that won me a Nobel prize. But no longer," he writes. "Just as Wall Street needs to break the hold of bonus culture, so science must break the tyranny of the luxury journals."

Schekman is the editor of eLife, an online journal set up by the Wellcome Trust. Articles submitted to the journal – a competitor to Nature, Cell and Science – are discussed by reviewers who are working scientists and accepted if all agree. The papers are free for anyone to read.

Schekman criticises Nature, Cell and Science for artificially restricting the number of papers they accept, a policy he says stokes demand "like fashion designers who create limited-edition handbags." He also attacks a widespread metric called an "impact factor", used by many top-tier journals in their marketing.

A journal's impact factor is a measure of how often its papers are cited, and is used as a proxy for quality. But Schekman said it was a "toxic influence" on science that "introduced a distortion". He writes: "A paper can become highly cited because it is good science - or because it is eye-catching, provocative, or wrong."

Daniel Sirkis, a postdoc in Schekman's lab, said many scientists wasted a lot of time trying to get their work into Cell, Science and Nature. "It's true I could have a harder time getting my foot in the door of certain elite institutions without papers in these journals during my postdoc, but I don't think I'd want to do science at a place that had this as one of their most important criteria for hiring anyway," he told the Guardian.

Sebastian Springer, a biochemist at Jacobs University in Bremen, who worked with Schekman at the University of California, Berkeley, said he agreed there were major problems in scientific publishing, but no better model yet existed. "The system is not meritocratic. You don't necessarily see the best papers published in those journals. The editors are not professional scientists, they are journalists, which isn't necessarily the greatest problem, but they emphasise novelty over solid work," he said.

Springer said it was not enough for individual scientists to take a stand. Scientists are hired and awarded grants and fellowships on the basis of which journals they publish in. "The hiring committees all around the world need to acknowledge this issue," he said.

Philip Campbell, editor-in-chief at Nature, said the journal had worked with the scientific community for more than 140 years and the support it had from authors and reviewers was validation that it served their needs.

"We select research for publication in Nature on the basis of scientific significance. That in turn may lead to citation impact and media coverage, but Nature editors aren't driven by those considerations, and couldn't predict them even if they wished to do so," he said.

"The research community tends towards an over-reliance in assessing research by the journal in which it appears, or the impact factor of that journal. In a survey Nature Publishing Group conducted this year of over 20,000 scientists, the three most important factors in choosing a journal to submit to were: the reputation of the journal; the relevance of the journal content to their discipline; and the journal's impact factor. My colleagues and I have expressed concerns about over-reliance on impact factors many times over the years, both in the pages of Nature and elsewhere."

Monica Bradford, executive editor at Science, said: "We have a large circulation and printing additional papers has a real economic cost … Our editorial staff is dedicated to ensuring a thorough and professional peer review upon which they determine which papers to select for inclusion in our journal. There is nothing artificial about the acceptance rate. It reflects the scope and mission of our journal."

Emilie Marcus, editor of Cell, said: "Since its launch nearly 40 years ago, Cell has focused on providing strong editorial vision, best-in-class author service with informed and responsive professional editors, rapid and rigorous peer-review from leading academic researchers, and sophisticated production quality. Cell's raison d'etre is to serve science and scientists and if we fail to offer value for both our authors and readers, the journal will not flourish; for us doing so is a founding principle, not a luxury."

• This article was amended on 10 December 2013 to include a response from Cell editor Emilie Marcus, which arrived after the initial publication deadline.
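As an aside, the "impact factor" contested above is just simple arithmetic. A minimal sketch (the figures below are invented for illustration, not any real journal's numbers):

```python
def impact_factor(citations: int, citable_items: int) -> float:
    """Impact factor for year Y: citations received in year Y to items
    published in years Y-1 and Y-2, divided by the number of citable
    items the journal published in Y-1 and Y-2."""
    return citations / citable_items

# Invented example: 21,936 citations in 2013 to a journal's 2011-12
# papers, spread over 712 citable items:
print(round(impact_factor(21936, 712), 1))  # 30.8
```

Which is Schekman's point: a single division, easily inflated by a handful of eye-catching (or wrong) papers, ends up standing in for the quality of everything else the journal prints.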

IainB

  • Supporting Member
  • Joined in 2008
  • **
  • Posts: 6,136
  • Slartibartfarst
    • View Profile
    • Donate to Member
Re: Peer Review and the Scientific Process
« Reply #67 on: January 13, 2014, 12:47:35 AM »
Depressing.
Oh, Never Mind: Top 5 Retracted Science Studies of 2013

Probably speaks volumes about the "reputable" (cough, cough) journals that publish some of this rot... er, peer-reviewed research.

IainB

  • Supporting Member
  • Joined in 2008
  • **
  • Posts: 6,136
  • Slartibartfarst
    • View Profile
    • Donate to Member
The peer review game
« Reply #68 on: February 19, 2014, 03:23:33 AM »
It rather looks like someone at the U. of Bristol (UK) is prepared to address the real issue - the elephant in the room - i.e., "increasing concern surrounding the reproducibility of much published research". Some people may find it surprising that this is discussed in the journal Nature. Unfortunately, it is of course behind a paywall.
Bishop Hill mentions it:
_________________________________
The peer review game
Feb 19, 2014 Journals

There is an interesting letter in Nature this week. In-Uck Park of the University of Bristol and his colleagues have adopted something of a game-theoretic approach to try to understand aspects of the peer review process.
Quote
The objective of science is to advance knowledge, primarily in two interlinked ways: circulating ideas, and defending or criticizing the ideas of others. Peer review acts as the gatekeeper to these mechanisms. Given the increasing concern surrounding the reproducibility of much published research, it is critical to understand whether peer review is intrinsically susceptible to failure, or whether other extrinsic factors are responsible that distort scientists’ decisions. Here we show that even when scientists are motivated to promote the truth, their behaviour may be influenced, and even dominated, by information gleaned from their peers’ behaviour, rather than by their personal dispositions. This phenomenon, known as herding, subjects the scientific community to an inherent risk of converging on an incorrect answer and raises the possibility that, under certain conditions, science may not be self-correcting. We further demonstrate that exercising some subjectivity in reviewer decisions, which serves to curb the herding process, can be beneficial for the scientific community in processing available information to estimate truth more accurately. By examining the impact of different models of reviewer decisions on the dynamic process of publication, and thereby on eventual aggregation of knowledge, we provide a new perspective on the ongoing discussion of how the peer-review process may be improved.

Which is a pretty interesting result, and one which I think will ring true with many readers at BH at least. Here's an excerpt from the conclusions:
Quote
Science may ... not be as self-correcting as is commonly assumed, and peer-review models which encourage objectivity over subjectivity may reduce the ability of science to self-correct. Although herding among agents is well understood in cases where the incentives directly reward acting in accord with the crowd (for example, financial markets), it is instructive to see that it can occur when agents (that is, scientists) are motivated by the pursuit of truth, and when gatekeepers (that is, reviewers and editors) exist with the same motivation. In such cases, it is important that individuals put weight on their private signals, in order to be able to escape from herding. Behavioural economic experiments indicate that prediction markets, which aggregate private signals across market participants, might provide information advantages. Knowledge in scientific research is often highly diffuse, across individuals and groups, and publishing and peer-review models should attempt to capture this. We have discussed the importance of allowing reviewers to express subjective opinions in their recommendations, but other approaches, such as the use of post-publication peer review, may achieve the same end.
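The herding mechanism the abstract describes can be illustrated with a toy information-cascade simulation. This is a sketch of the classic sequential-decision cascade model, not the authors' actual game-theoretic model, and every parameter here is invented:

```python
import random

def run_review_sequence(n_reviewers, signal_accuracy, truth=1, seed=None):
    """Sequential reviewers each get a noisy private signal about whether a
    claim is true (correct with probability signal_accuracy), but also see
    all earlier public verdicts. Each follows the majority of (earlier
    verdicts + own signal); ties are broken by the private signal.
    Returns the list of public verdicts (1 = accept claim, 0 = reject)."""
    rng = random.Random(seed)
    verdicts = []
    for _ in range(n_reviewers):
        private = truth if rng.random() < signal_accuracy else 1 - truth
        ones = sum(verdicts) + (private == 1)
        zeros = (len(verdicts) - sum(verdicts)) + (private == 0)
        if ones > zeros:
            verdicts.append(1)
        elif zeros > ones:
            verdicts.append(0)
        else:
            verdicts.append(private)  # tie: trust the private signal
    return verdicts

def wrong_cascade_rate(trials, n_reviewers, signal_accuracy, truth=1):
    """Fraction of seeded runs whose final consensus disagrees with the truth."""
    wrong = 0
    for t in range(trials):
        v = run_review_sequence(n_reviewers, signal_accuracy, truth, seed=t)
        if v[-1] != truth:
            wrong += 1
    return wrong / trials
```

Even when every reviewer's private signal is, say, 70% accurate, a run in which the first couple of signals happen to be wrong locks all later reviewers into the wrong public verdict - the "converging on an incorrect answer" the letter warns about. The tie-breaking rule (putting weight on the private signal) is the only thing in this toy model that limits the damage.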

IainB

  • Supporting Member
  • Joined in 2008
  • **
  • Posts: 6,136
  • Slartibartfarst
    • View Profile
    • Donate to Member
The fraud was apparently first reported in the journal Nature.
(Copied below sans embedded hyperlinks/images.)
Quote
Over 100 published science journal articles just gibberish

By Maxim Lott
Published March 01, 2014
FoxNews.com

Image:fake articles in science journals.jpg

Do scientific papers ever seem like unreadable gibberish to you? Well, sometimes they really are.

Some 120 papers published in established scientific journals over the last few years have been found to be frauds, created by nothing more than an automated word generator that puts random, fancy-sounding words together in plausible sentence structures. As a result they have been pulled from the journals that originally published them.

The fake papers are in the fields of computer science and math and have titles such as “Application and Research of Smalltalk Harnessing Based on Game-Theoretic Symmetries”; “An Evaluation of E-Business with Fin”; and “Simulating Flip-Flop Gates Using Peer-to-Peer Methodologies.” The authors of those papers did not respond to requests for comment from FoxNews.com.

This is not the first time nonsense papers have been published.

In 1996, as a test, a physics professor submitted a fake paper to the philosophy journal Social Text. His paper argued that gravity is “postmodern” because it is “free from any dependence on the concept of objective truth.” Yet it was accepted and published.

But how could gibberish end up in respectable science papers? The man who discovered the recent frauds said it showed slipping standards among scientists.

"High pressure on scientists leads directly to too prolific and less meaningful publications," computer scientist Cyril Labbé of Joseph Fourier University in France, told FoxNews.com.

But he has no explanation as to why the journals published meaningless papers.

"They all should have been evaluated by a peer-review process. I've no explanation for them being here. I guess each of them needs an investigation," he said.

The publishers also could not explain it, admitting that the papers “are all nonsense.”

“We are in the process of investigating… [and] taking the papers down as quickly as possible. A placeholder notice will be put up once the papers have been removed. Since we publish over 2,200 journals and 8,400 books annually, this will take some time,” Eric Merkel-Sobotta, a spokesman for the publisher Springer, which published 16 of the fake papers, told FoxNews.com.

The fraud was first reported in the journal Nature.

Labbé has made it his mission to detect fakes, and ironically has published a paper in a Springer journal about how to automatically detect fake papers. He also built a website that detects whether papers are computer generated.                                                                                     

“Our tools are very efficient to detect SCIgen papers and also to detect duplicates and plagiarisms,” Labbé said. SCIgen is the program that generates random papers.

Some professors said that pay rules that base professor salaries on the number of papers they publish may lead to fakes.

“Most schools have merit raise systems of some kind, and a professor’s merit score is affected by his or her success in publishing scholarly papers,” Robert Archibald, a professor of economics at the College of William and Mary, who studies the economics of higher education, told FoxNews.com.

He noted that because other professors may not read the paper, “publishing a paper that was computer-generated might help with merit pay.”

Labbé also said that overly numerical measures might encourage fraud.

“In aiming at measuring science it is perturbing science,” he said.

The author of this piece, Maxim Lott, can be reached on twitter at @maximlott or at maxim.lott@foxnews.com

ewemoa

  • Honorary Member
  • Joined in 2008
  • **
  • Posts: 2,845
    • View Profile
    • Donate to Member
Re: Peer Review and the Scientific Process
« Reply #70 on: March 04, 2014, 05:51:58 PM »
Thanks for that follow-up!

IainB

  • Supporting Member
  • Joined in 2008
  • **
  • Posts: 6,136
  • Slartibartfarst
    • View Profile
    • Donate to Member
Re: Peer Review and the Scientific Process
« Reply #71 on: March 04, 2014, 11:46:52 PM »
The fraud was apparently first reported in the journal Nature.
(Copied below sans embedded hyperlinks/images.)
Quote
Over 100 published science journal articles just gibberish
  • ...The fake papers are in the fields of computer science and math...

  • ...This is not the first time nonsense papers have been published. ...

  • ...But how could gibberish end up in respectable science papers? The man who discovered the recent frauds said it showed slipping standards among scientists.
    "High pressure on scientists leads directly to too prolific and less meaningful publications," computer scientist Cyril Labbé of Joseph Fourier University in France, told FoxNews.com.
    But he has no explanation as to why the journals published meaningless papers.
    "They all should have been evaluated by a peer-review process. I've no explanation for them being here. I guess each of them needs an investigation," he said. ...

  • ...The publishers also could not explain it, admitting that the papers “are all nonsense.”...

  • ...Some professors said that pay rules that base professor salaries on the number of papers they publish may lead to fakes.
    “Most schools have merit raise systems of some kind, and a professor’s merit score is affected by his or her success in publishing scholarly papers,” Robert Archibald, a professor of economics at the College of William and Mary, who studies the economics of higher education, told FoxNews.com.
    He noted that because other professors may not read the paper, “publishing a paper that was computer-generated might help with merit pay.”
    Labbé also said that overly numerical measures might encourage fraud.
    “In aiming at measuring science it is perturbing science,” he said.

Looks like this could be an absolutely classic own goal by the moronic academic administrations that subscribe to the outmoded and discarded management practice of basing merit pay on numerical measures. It is well-documented what happens when you do that: you get unintended consequences.
Points 10 and 11 of Deming's 14-point philosophy cover this very well:
  • 10. Eliminate slogans, exhortations and targets for the workforce asking for zero defects and new levels of productivity. Such exhortations only create adversarial relationships, as the bulk of the causes of low quality and low productivity belong to the system and thus lie beyond the power of the workforce.

  • 11. (a) Eliminate work standards (quotas) on the factory floor. Substitute leadership.
          (b) Eliminate management by objective. Eliminate management by numbers, numerical goals. Substitute leadership.

Therefore, it is arguably not so much that "it showed slipping standards among scientists" as that the university administrations were effectively dictating a lowered standard as the price of higher merit pay - i.e., the more you publish, the more we'll pay you, regardless of the quality.

barney

  • Charter Member
  • Joined in 2006
  • ***
  • Posts: 1,282
    • View Profile
    • Donate to Member
Re: Peer Review and the Scientific Process
« Reply #72 on: March 05, 2014, 12:16:46 AM »
... the more you publish, the more we'll pay you, regardless of the quality.
Well, haven't we all - or at least most of us - been subjected to the same thing?

When management - any management, including our so-called peers - demands quantity, performance and quality degrade.  Not so certain that could not be called a force of nature.  When position becomes more important than performance, those in position punish those who do not perform to the satisfaction and gratification and reputation of those in position, no?  The powers that be, in most any venue, want accolades, rather than performance.  Recognition for private/personal performance is seldom rendered unless that recognition benefits those other than the performer.  My mind is awash with similes, but none compare with the reality of illusions fostered by governing bodies.

IainB

  • Supporting Member
  • Joined in 2008
  • **
  • Posts: 6,136
  • Slartibartfarst
    • View Profile
    • Donate to Member
Re: Peer Review and the Scientific Process
« Reply #73 on: March 05, 2014, 01:19:14 AM »
Well, haven't we all - or at least most of us - been subjected to the same thing?
A rhetorical question. I can't speak for anyone else, but in my case, the answer would be "Yes, but no", mainly because I have always resisted attempts to impose that sort of asinine method of motivation on me. It is insidious, self-destructive and corrosive of the human spirit, but probably more importantly, it is guaranteed to adversely affect quality of output of a process (Deming, et al - esp. the experiment with the red and white beads).
« Last Edit: March 06, 2014, 05:42:04 AM by IainB »

TaoPhoenix

  • Supporting Member
  • Joined in 2011
  • **
  • Posts: 4,550
    • View Profile
    • Donate to Member
Re: Peer Review and the Scientific Process
« Reply #74 on: March 05, 2014, 04:02:43 AM »
... the more you publish, the more we'll pay you, regardless of the quality.
Well, haven't we all - or at least most of us - been subjected to the same thing?

When management - any management, including our so-called peers - demands quantity, performance and quality degrade.  Not so certain that could not be called a force of nature.  When position becomes more important than performance, those in position punish those who do not perform to the satisfaction and gratification and reputation of those in position, no?  The powers that be, in most any venue, want accolades, rather than performance.  Recognition for private/personal performance is seldom rendered unless that recognition benefits those other than the performer.  My mind is awash with similes, but none compare with the reality of illusions fostered by governing bodies.

Just had this happen today!
 :o