Topic: Peer Review and the Scientific Process  (Read 45204 times)

Renegade

  • Charter Member
  • Joined in 2005
  • Posts: 13,187
Re: Peer Review and the Scientific Process
« Reply #200 on: December 26, 2015, 08:26:10 AM »
Tangentially related and funny:

[Embedded video: Alternate Viewpoint-Cancelling Headphones - We the Internet Sketch 9]

Slow Down Music - Where I commit thought crimes...

Freedom is the right to be wrong, not the right to do wrong. - John Diefenbaker

IainB

  • Supporting Member
  • Joined in 2008
  • Posts: 5,822
Re: Peer Review and the Scientific Process
« Reply #201 on: March 04, 2016, 05:25:07 AM »
Well this was a surprise! (NOT)
(There are some quite amusing bits in here.)
Psychologists Call Out the Study That Called Out the Field of Psychology
Quote
By Rachel E. Gross
Independent researchers have had trouble replicating the famous findings of Harvard psychologist Amy Cuddy (pictured).

Craig Barritt/Getty Images for Cosmopolitan magazine and WME Live

Remember that study that found that most psychology studies were wrong? Yeah, that study was wrong. That’s the conclusion of four researchers who recently interrogated the methods of that study, which itself interrogated the methods of 100 psychology studies to find that very few could be replicated. (Whoa.) Their damning commentary will be published Friday in the journal Science. (The scientific body that publishes the journal sent Slate an early copy.)

In case you missed the hullabaloo: A key feature of the scientific method is that scientific results should be reproducible—that is, if you run an experiment again, you should get the same results. If you don’t, you’ve got a problem. And a problem is exactly what 270 scientists found last August, when they decided to try to reproduce 100 peer-reviewed journal studies in the field of social psychology. Only around 39 percent of the reproduced studies, they found, came up with similar results to the originals.

That meta-analysis, published in Science by a group called the Open Science Collaboration, led to mass hand-wringing over the “replicability crisis” in psychology. (It wasn’t the first time that the field has faced such criticism, as Michelle N. Meyer and Christopher Chabris have reported in Slate, but this particular study was a doozy.)

Now this new commentary, from Harvard’s Gary King and Daniel Gilbert and the University of Virginia’s Timothy Wilson, finds that the OSC study was bogus—for a dazzling array of reasons. I know you’re busy, so let’s examine just two.

The first—which is what tipped researchers off to the study being not-quite-right in the first place—was statistical. The whole scandal, after all, was over the fact that such a low number of the original 100 studies turned out to be reproducible. But when King, a social scientist and statistician, saw the study, he didn’t think the number looked that low. Yeah, I know, 39 percent sounds really low—but it’s about what social scientists should expect, given the fact that errors could occur either in the original studies or the replicas, says King.

His colleagues agreed, telling him, according to King, “This study is completely unfair—and even irresponsible.”

Upon investigating the study further, the researchers identified a second and more crucial problem. Basically, the OSC researchers did a terrible job replicating those 100 studies in the first place. As King put it: “You’d think that a test about replications would actually reproduce the original studies.” But no! Some of the methods used for the reproduced studies were utterly confounding—for instance, OSC researchers tried to reproduce an American study that dealt with Stanford University students’ attitudes toward affirmative action policies by using Dutch students at the University of Amsterdam. Others simply didn’t use enough subjects to be reliable.

The new analysis “completely repudiates” the idea that the OSC study provides evidence for a crisis in psychology, says King. Of course, that doesn’t mean we shouldn’t be concerned with reproducibility in science. “We should be obsessed with these questions,” says King. “They are incredibly important. But it isn’t true that all social psychologists are making stuff up.”

After all, King points out, the OSC researchers used admirable, transparent methods to come to their own—ultimately wrong—conclusions. Specifically, those authors made all their data easily accessible and clearly explained their methods—making it all the easier for King and his co-authors to tear it apart. The OSC researchers also read early drafts of the new commentary, helpfully adding notes and clarifications where needed. “Without that, we wouldn’t have been able to write our article,” says King. Now that’s collaboration!

“We look forward to the next article that tries to conclude that we’re wrong,” he adds.
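
To make King's expected-replication-rate argument concrete, here is a minimal back-of-envelope sketch in Python. The input numbers (the fraction of true hypotheses and the per-study power) are illustrative assumptions, not figures from the commentary; the point is only that imperfect statistical power in both the original study and the replication pulls the expected replication rate well below 100 percent, even when every researcher is behaving honestly.

Code: [Select]
# Back-of-envelope model of King's point. All numbers below are
# illustrative assumptions, not figures from the Science commentary.

def expected_replication_rate(p_true, power, alpha=0.05):
    """Expected fraction of significant original findings that replicate.

    p_true -- fraction of tested hypotheses that are actually true
    power  -- probability a single study detects a true effect
              (assumed equal for original and replication)
    alpha  -- false-positive rate of a single study
    """
    sig_true = p_true * power           # true effects that reached significance
    sig_false = (1 - p_true) * alpha    # false positives among the originals
    # A true finding replicates with probability `power`;
    # a false positive "replicates" only with probability `alpha`.
    replicated = sig_true * power + sig_false * alpha
    return replicated / (sig_true + sig_false)

# With half of all hypotheses true and 50% power per study, only ~46%
# of significant originals are expected to replicate -- not far from
# the 39% the OSC reported.
print(f"{expected_replication_rate(0.5, 0.5):.0%}")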

IainB

  • Supporting Member
  • Joined in 2008
  • Posts: 5,822
Re: Peer Review and the Scientific Process
« Reply #202 on: March 10, 2016, 02:29:12 AM »
@Renegade: By the way, I just watched Alternate Viewpoint-Cancelling Headphones - We the Internet Sketch 9
I've met quite a few people who might like to buy these headphones, so, thanks for the link.    ;)

IainB

  • Supporting Member
  • Joined in 2008
  • Posts: 5,822
Re: Peer Review and the Scientific Process
« Reply #203 on: March 16, 2016, 05:50:40 PM »
I have been interested in "shaken baby syndrome" ever since watching a UK TV documentary about it as a child. It was the first time I learned that parents could snap, lose control, and seriously harm their babies out of pent-up frustration and anger, without actually intending to harm them.
Whilst I had realised that the diagnosis was based on a hypothesis rather than established fact, I had not realised that in the UK one is apparently not allowed to talk about it as a hypothesis, but only as an established fact.
Quote
This shaken baby syndrome case is a dark day for science – and for justice
Clive Stafford Smith

A leading doctor faces being struck off for challenging the theory about the infant condition. It’s like Galileo all over again
‘Shaken baby syndrome is almost unique among medical diagnoses in that it is not focused on treating the child.’ Photograph: Moodboard/Alamy

Monday 14 March 2016 09.30 GMT
Last modified on Tuesday 15 March 2016 11.42 GMT

On Friday, I witnessed something akin to a reenactment of the trial of Galileo, precisely four centuries after the original. Dr Waney Squier faces being struck off by the General Medical Council (GMC) for having the temerity to challenge the mainstream theory on shaken baby syndrome (SBS).

For years, the medical profession has boldly asserted that a particular “triad” of neurological observations is essentially diagnostic of SBS. Since the Nuremberg Code properly prevents human experimentation, this is an unproved hypothesis, and there has been rising doubt as to its validity.


I am convinced that Squier is correct, but one does not have to agree with me to see the ugly side to the GMC prosecution: the moment that we are denied the right to question a scientific theory that is held by the majority, we are not far away from Galileo’s predicament in 1615, as he appeared before the papal inquisition. He dared to suggest that the Bible was an authority on faith and morals, rather than on science, and that 1 Chronicles 16:30 – “the world is firmly established, it cannot be moved” – did not mean that the Earth was rigidly lodged at the epicentre of the universe. It was not until 1982 that Pope John Paul II issued a formal admission that the church had got it wrong.

Shaken baby syndrome is almost unique among medical diagnoses in that it is not focused on treating the child. If an infant has bleeding on the brain (a subdural hematoma), the doctor wants to relieve the pressure – it is of little relevance how the infant came about the injury. SBS is, then, a “diagnosis” of a crime rather than an illness, and when a brain surgeon comes into the courtroom and “diagnoses” guilt, the defendant, mostly a parent, is likely to go to prison – or worse.

I have defended a number of emotionally charged capital cases where doctors have opined that a child had to have been shaken by an angry parent because it was “impossible” for the triad of neurological sequelae to result from an accident – it “had” to be caused by shaking. Many American doctors adhere to a bizarre notion that an infant cannot suffer a fatal head injury from a fall of less than three storeys. While we cannot drop a series of infants on their heads to test this, it would appear to be plain folly. The velocity of a five-foot fall means a child’s head can hit the ground at roughly 15mph, which is faster than most people – short of Usain Bolt - can sprint. I invited a series of neurosurgeons to run headlong into a hardwood wall in one courtroom, so we could see what happened to them. They politely declined, and stuck to their silly theory.


Squier has now been branded a “liar” by the panel, and found “guilty” of paying insufficient respect to her peers. Dr Michael Powers, perhaps the pre-eminent QC in the area of medico-legal practice in the UK, believes that the GMC tribunal – made up of a retired wing commander, a retired policeman and a retired geriatric psychiatrist – was not qualified to understand the complex pathology of the developing brain. “It is therefore sad, but not surprising, that they have reached the wrong conclusion,” he said. “The proper forum for debating these issues is the international neuroscience community.”

Powers has a point: Michele Codd, the chair of the panel, was a general duties officer in the RAF for 32 years. One might doubt whether Stephen Marr, a retired Merseyside police officer, would hold up a constable’s hand to a prosecution theory that has sent so many people to prison.

Nisreen Booya was the sole person with any meaningful medical qualifications on the panel, but in a rather different area: she is a retired psychiatrist specialising in geriatric issues such as Alzheimer’s, an illness that, like infant head trauma, is “poorly understood”. She is quoted as saying that she “made a career of trying to provide innovative services” in her field – and yet she condemns Squier for thinking outside her own rigid box. All three are doubtless honourable people, but they are simply wrong to hold SBS up as the fifth gospel.

At the risk of being diagnosed with “I told you so” syndrome, I wrote an article 20 years ago questioning whether forensic hair analysis was really science. I was pleased therefore when, in 2015, the FBI admitted that they had got it wrong for decades – but this came after thousands of men, women and children had been convicted on the basis of latter-day snake oil, and scores had been sent to death row.

Those deemed to be blasphemers often suffer a gruesome fate. Although Squier may be struck off, at least she will not be burned at the stake. But the impact on medical science will be immense, because what other doctor will be prepared to question the prosecution theory if it means the end of a career? This is a very dark day for science, as it is for justice.
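
A quick sanity check on the fall arithmetic quoted above, as a minimal sketch assuming simple free fall with no air resistance (the simplification is mine): a five-foot drop gives an impact speed of about 12 mph, a little under the article's "roughly 15mph" but in the same ballpark.

Code: [Select]
import math

g = 9.81          # gravitational acceleration, m/s^2
h = 5 * 0.3048    # 5 feet converted to metres
v = math.sqrt(2 * g * h)                       # free-fall impact speed
print(f"{v:.1f} m/s = {v * 2.23694:.0f} mph")  # 5.5 m/s = 12 mph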
« Last Edit: March 16, 2016, 06:23:15 PM by IainB »


Renegade

  • Charter Member
  • Joined in 2005
  • Posts: 13,187
Re: Peer Review and the Scientific Process
« Reply #205 on: April 18, 2016, 11:32:48 PM »
http://theweek.com/a...1/big-science-broken

Quote
Big Science is broken


Science is broken.

That's the thesis of a must-read article in First Things magazine, in which William A. Wilson accumulates evidence that a lot of published research is false. But that's not even the worst part.

Advocates of the existing scientific research paradigm usually smugly declare that while some published conclusions are surely false, the scientific method has "self-correcting mechanisms" that ensure that, eventually, the truth will prevail. Unfortunately for all of us, Wilson makes a convincing argument that those self-correcting mechanisms are broken.

For starters, there's a "replication crisis" in science. This is particularly true in the field of experimental psychology, where far too many prestigious psychology studies simply can't be reliably replicated. But it's not just psychology. In 2011, the pharmaceutical company Bayer looked at 67 blockbuster drug discovery research findings published in prestigious journals, and found that three-fourths of them weren't right. Another study of cancer research found that only 11 percent of preclinical cancer research could be reproduced. Even in physics, supposedly the hardest and most reliable of all sciences, Wilson points out that "two of the most vaunted physics results of the past few years — the announced discovery of both cosmic inflation and gravitational waves at the BICEP2 experiment in Antarctica, and the supposed discovery of superluminal neutrinos at the Swiss-Italian border — have now been retracted, with far less fanfare than when they were first published."

What explains this? In some cases, human error. Much of the research world exploded in rage and mockery when it was found out that a highly popularized finding by the economists Ken Rogoff and Carmen Reinhart linking higher public debt to lower growth was due to an Excel error. Steven Levitt, of Freakonomics fame, largely built his career on a paper arguing that abortion led to lower crime rates 20 years later because the aborted babies were disproportionately future criminals. Two economists went through the painstaking work of recoding Levitt's statistical analysis — and found a basic arithmetic error.

Then there is outright fraud. In a 2011 survey of 2,000 research psychologists, over half admitted to selectively reporting those experiments that gave the result they were after. The survey also concluded that around 10 percent of research psychologists have engaged in outright falsification of data, and more than half have engaged in "less brazen but still fraudulent behavior such as reporting that a result was statistically significant when it was not, or deciding between two different data analysis techniques after looking at the results of each and choosing the more favorable."

Then there's everything in between human error and outright fraud: rounding out numbers the way that looks better, checking a result less thoroughly when it comes out the way you like, and so forth.

More at the link.
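
A toy simulation of the "choose the more favourable analysis" behaviour described above; this is my own illustration, not from the article, and all the names and numbers in it are assumptions. Draw two groups from the same distribution (so there is no real effect), run two plausible analyses, and report whichever comes out better: the false-positive rate quietly climbs above the nominal 5 percent.

Code: [Select]
# Toy illustration of "deciding between two different data analysis
# techniques after looking at the results of each and choosing the
# more favorable".
import math
import random
import statistics

def t_pvalue(a, b):
    """Approximate two-sided two-sample t-test p-value (normal approximation)."""
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

random.seed(1)
trials, false_positives = 2000, 0
for _ in range(trials):
    # Both groups come from the SAME distribution: any "effect" is noise.
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]
    p_full = t_pvalue(a, b)                # analysis 1: all the data
    p_early = t_pvalue(a[:20], b[:20])     # analysis 2: an "early look" subset
    if min(p_full, p_early) < 0.05:        # report whichever looks better
        false_positives += 1

# Nominal rate is 5%; cherry-picking between the two analyses
# pushes the observed rate to roughly 7-8%.
print(f"false-positive rate: {false_positives / trials:.1%}")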

Link to the article at First Things:

http://www.firstthin...5/scientific-regress

Quote
SCIENTIFIC REGRESS

The problem with science is that so much of it simply isn’t. Last summer, the Open Science Collaboration announced that it had tried to replicate one hundred published psychology experiments sampled from three of the most prestigious journals in the field. Scientific claims rest on the idea that experiments repeated under nearly identical conditions ought to yield approximately the same results, but until very recently, very few had bothered to check in a systematic way whether this was actually the case. The OSC was the biggest attempt yet to check a field’s results, and the most shocking. In many cases, they had used original experimental materials, and sometimes even performed the experiments under the guidance of the original researchers. Of the studies that had originally reported positive results, an astonishing 65 percent failed to show statistical significance on replication, and many of the remainder showed greatly reduced effect sizes.

Their findings made the news, and quickly became a club with which to bash the social sciences. But the problem isn’t just with psychology. There’s an unspoken rule in the pharmaceutical industry that half of all academic biomedical research will ultimately prove false, and in 2011 a group of researchers at Bayer decided to test it. Looking at sixty-seven recent drug discovery projects based on preclinical cancer biology research, they found that in more than 75 percent of cases the published data did not match up with their in-house attempts to replicate. These were not studies published in fly-by-night oncology journals, but blockbuster research featured in Science, Nature, Cell, and the like. The Bayer researchers were drowning in bad studies, and it was to this, in part, that they attributed the mysteriously declining yields of drug pipelines. Perhaps so many of these new drugs fail to have an effect because the basic research on which their development was based isn’t valid.

When a study fails to replicate, there are two possible interpretations. The first is that, unbeknownst to the investigators, there was a real difference in experimental setup between the original investigation and the failed replication. These are colloquially referred to as “wallpaper effects,” the joke being that the experiment was affected by the color of the wallpaper in the room. This is the happiest possible explanation for failure to reproduce: It means that both experiments have revealed facts about the universe, and we now have the opportunity to learn what the difference was between them and to incorporate a new and subtler distinction into our theories.

The other interpretation is that the original finding was false. Unfortunately, an ingenious statistical argument shows that this second interpretation is far more likely. First articulated by John Ioannidis, a professor at Stanford University’s School of Medicine, this argument proceeds by a simple application of Bayesian statistics. Suppose that there are a hundred and one stones in a certain field. One of them has a diamond inside it, and, luckily, you have a diamond-detecting device that advertises 99 percent accuracy. After an hour or so of moving the device around, examining each stone in turn, suddenly alarms flash and sirens wail while the device is pointed at a promising-looking stone. What is the probability that the stone contains a diamond?

Most would say that if the device advertises 99 percent accuracy, then there is a 99 percent chance that the device is correctly discerning a diamond, and a 1 percent chance that it has given a false positive reading. But consider: Of the one hundred and one stones in the field, only one is truly a diamond. Granted, our machine has a very high probability of correctly declaring it to be a diamond. But there are many more diamond-free stones, and while the machine only has a 1 percent chance of falsely declaring each of them to be a diamond, there are a hundred of them. So if we were to wave the detector over every stone in the field, it would, on average, sound twice—once for the real diamond, and once when a false reading was triggered by a stone. If we know only that the alarm has sounded, these two possibilities are roughly equally probable, giving us an approximately 50 percent chance that the stone really contains a diamond.

More at that link as well.
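
For anyone who wants to verify the diamond-detector arithmetic, the same Bayes' rule calculation takes only a few lines of Python, using exactly the article's numbers:

Code: [Select]
# Bayes' rule on the article's diamond-detector example.
p_diamond = 1 / 101    # prior: one stone in 101 holds a diamond
accuracy = 0.99        # detector is right 99% of the time

# Total probability the alarm sounds: true positive plus false positive.
p_alarm = accuracy * p_diamond + (1 - accuracy) * (1 - p_diamond)
posterior = accuracy * p_diamond / p_alarm
print(f"P(diamond | alarm) = {posterior:.1%}")   # ~49.7%, i.e. about 50/50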
Slow Down Music - Where I commit thought crimes...

Freedom is the right to be wrong, not the right to do wrong. - John Diefenbaker

IainB

  • Supporting Member
  • Joined in 2008
  • Posts: 5,822
Re: Peer Review and the Scientific Process
« Reply #206 on: April 27, 2016, 11:45:13 PM »
@Renegade: ^^ Those are very interesting links, thanks.
True, but somewhat depressing.