
Peer Review and the Scientific Process


But in order to be a far-right extremist, you only have to say something like "saving money is good" or "being in debt is bad."
-Renegade (December 02, 2014, 12:48 PM)
--- End quote ---

Not true. But you have your agenda I suppose. :P
-40hz (December 02, 2014, 01:48 PM)
--- End quote ---

Try discussing monetary policy, currency, etc. You'll very quickly find that only "far-right extremists" advocate things like that.

Here's a quick example from a professor of economics at Oxford:

So my argument is that Keynesian theory is not left wing...
--- End quote ---

Etc. etc.

Keynesian economics is framed as centrist. He puts fiscal conservatism on the far right. And he's far from alone.

Now, keep in mind chapter 2 of "The Communist Manifesto" and this:

Nevertheless, in most advanced countries, the following will be pretty generally applicable.
2. A heavy progressive or graduated income tax.
5. Centralisation of credit in the hands of the state, by means of a national bank with State capital and an exclusive monopoly.
--- End quote ---

Which are also central tenets of Keynesian economics, i.e. progressive taxation and central banking.

I could blather on about this for quite some time, but I think I've sufficiently made my point: All you need to do to be a right-wing extremist is to advocate fiscal responsibility outside of a Keynesian/communist model.

NOTE: I've deliberately picked economics for the example, as there is a stronger argument that economics is a science than can be made for some other areas where someone might also be called "right-wing" - though in those areas with much greater justification.

For a quick diversion there:

The advance of behavioural economics is not fundamentally in conflict with mathematical economics, as some seem to think, though it may well be in conflict with some currently fashionable mathematical economic models. And, while economics presents its own methodological problems, the basic challenges facing researchers are not fundamentally different from those faced by researchers in other fields. As economics develops, it will broaden its repertory of methods and sources of evidence, the science will become stronger, and the charlatans will be exposed.

• Robert J. Shiller, a 2013 Nobel laureate in economics, is Professor of Economics at Yale University.
--- End quote ---

And just some random dissenting opinion... And one more from The Harvard Crimson...

Also, since the article itself is about "traditional hard science", it makes little sense to frame the example in the same terms. As to whether economics is a science, that all depends on who you listen to. The mainstream or establishment view is that it is a science. This is debatable. Karl Popper's "The Poverty of Historicism" (1936~1957 - a bit complicated: it was first delivered as a paper, then expanded into a full book, which went unpublished for a number of years) helps to clarify how the point can be contested. Part of the inspiration for the book was Popper wanting to illustrate how both communism and fascism drew inspiration from historicism. Expanding on the quote above:

Nevertheless, in most advanced countries, the following will be pretty generally applicable.

1. Abolition of property in land and application of all rents of land to public purposes.
2. A heavy progressive or graduated income tax.
3. Abolition of all rights of inheritance.
4. Confiscation of the property of all emigrants and rebels.
5. Centralisation of credit in the hands of the state, by means of a national bank with State capital and an exclusive monopoly.
6. Centralisation of the means of communication and transport in the hands of the State.
7. Extension of factories and instruments of production owned by the State; the bringing into cultivation of waste-lands, and the improvement of the soil generally in accordance with a common plan.
8. Equal liability of all to work. Establishment of industrial armies, especially for agriculture.
9. Combination of agriculture with manufacturing industries; gradual abolition of all the distinction between town and country by a more equable distribution of the populace over the country.
10. Free education for all children in public schools. Abolition of children’s factory labour in its present form. Combination of education with industrial production, &c, &c.
--- End quote ---

Each point made is clearly within the domain of economics, with only point #10 being remotely contestable.

So at a minimum we can see how Marxism (and thus Marxist economics) falls into that category of historicism that Popper wishes to attack. The same holds for fascism.

It might be useful to note that the first reading of the paper "The Poverty of Historicism" was at the invitation of Friedrich von Hayek, a classical liberal economist who, at the time, would have been considered more "centrist" than he is today. Here's a fun tidbit to help make that point:

In 1984, he was appointed a member of the Order of the Companions of Honour by Queen Elizabeth II on the advice of Prime Minister Margaret Thatcher for his "services to the study of economics".
--- End quote ---

I think we know where most people put Thatcher. :)

But why would I spend so long blathering on about Karl Popper?

Additionally, Peter Medawar, John Eccles and Hermann Bondi are amongst the distinguished scientists who have acknowledged their intellectual indebtedness to his work, the latter declaring that 'There is no more to science than its method, and there is no more to its method than Popper has said.'
--- End quote ---

Because in a thread about science, it's useful to understand the man (and his writings) who basically defined what science is.

But back to the example being one of economics...

If we are to take science as neutral, but economics as political, can we take economics as science? That seems like a hard pill to swallow. Or do we take economics as potentially science, and potentially political, with some criteria by which we can separate the two? I would say that it lies entirely in the answer to whether or not its statements can be falsified.

If I can briefly rephrase my original 2 statements:

saving money is good

It is advantageous to have resources to freely draw upon at will

being in debt is bad

It is disadvantageous to have future labour allocated to uses that have no personal benefit (this could be better phrased, but close enough)

Whether or not those statements are falsifiable may be open to debate, but that saying them will end up with you being called "right-wing" isn't really up for debate because it happens. Regularly. The SPLC and Mark Potok are great examples there. :)

I'm not sure whether the term "peer review" carries any real weight nowadays - or whether it retains any scientific credibility or has any real meaning for science.
Interesting review of the state of affairs:
(Copied below sans embedded hyperlinks/images.)
DSHR's Blog: Stretching the "peer reviewed" brand until it snaps
Tuesday, January 6, 2015
Stretching the "peer reviewed" brand until it snaps
The very first post to this blog, seven-and-a-half years and 265 posts ago, was based on an NSF/JISC workshop on scholarly communication. I expressed skepticism about the value added by peer review, following Don Waters by quoting work from Diane Harley et al:

    They suggest that "the quality of peer review may be declining" with "a growing tendency to rely on secondary measures", "difficult[y] for reviewers in standard fields to judge submissions from compound disciplines", "difficulty in finding reviewers who are qualified, neutral and objective in a fairly closed academic community", "increasing reliance ... placed on the prestige of publication rather than ... actual content", and that "the proliferation of journals has resulted in the possibility of getting almost anything published somewhere" thus diluting "peer-reviewed" as a brand.

My prediction was:

    The big problem will be a more advanced version of the problems currently plaguing blogs, such as spam, abusive behavior, and deliberate subversion.

Since then, I've returned to the theme at intervals, pointing out that reviewers for top-ranked journals fail to perform even basic checks, that the peer-reviewed research on peer review shows that the value even top-ranked journals add is barely detectable, even before allowing for the value subtracted by their higher rate of retraction, and that any ranking system for journals is fundamentally counter-productive. As recently as 2013 Nature published a special issue on scientific publishing that refused to face these issues by failing to cite the relevant research. Ensuring relevant citation is supposed to be part of the value top-ranked journals add.

Recently, a series of incidents has made it harder for journals to ignore these problems. Below the fold, I look at some of them.

In November, Ivan Oransky at Retraction Watch reported that BioMed Central (owned by Springer) recently found about 50 papers in their editorial process whose reviewers were sock-puppets, part of a trend:

    Journals have retracted more than 100 papers in the past two years for fake peer reviews, many of which were written by the authors themselves.

Many of the sock-puppets were suggested by the authors themselves, functionality in the submission process that clearly indicates the publisher's lack of value-add. Nature published an overview of this vulnerability of peer review by Cat Ferguson, Adam Marcus and Oransky entitled Publishing: The peer-review scam that included jaw-dropping security lapses in major publishers' systems:

    [Elsevier's] Editorial Manager's main issue is the way it manages passwords. When users forget their password, the system sends it to them by e-mail, in plain text. For PLOS ONE, it actually sends out a password, without prompting, whenever it asks a user to sign in, for example to review a new manuscript.

In December, Oransky pointed to a study published in PNAS by Kyle Siler, Kirby Lee and Lisa Bero entitled Measuring the effectiveness of scientific gatekeeping. They tracked 1008 manuscripts submitted to three elite medical journals:

    Of the 808 eventually published articles in our dataset, our three focal journals rejected many highly cited manuscripts, including the 14 most popular; roughly the top 2 percent. Of those 14 articles, 12 were desk-rejected. This finding raises concerns regarding whether peer review is ill-suited to recognize and gestate the most impactful ideas and research.

Desk-rejected papers never even made it to review by peers. It's fair to say that Siler et al conclude:

    Despite this finding, results show that in our case studies, on the whole, there was value added in peer review.

These were elite journals, so a small net positive value add matches earlier research. But again, the fact that it was difficult to impossible for important, ground-breaking results to receive timely publication in elite journals is actually subtracting value. And, as Oransky says:

    Perhaps next up, the authors will look at why so many “breakthrough” papers are still published in top journals — only to be retracted. As Retraction Watch readers may recall, high-impact journals tend to have more retractions.

Also in December, via Yves Smith, I found Scholarly Mad Libs and Peer-less Reviews in which Marjorie Lazoff comments on the important article For Sale: “Your Name Here” in a Prestigious Science Journal from December's Scientific American (owned by Nature Publishing). In it Charles Seife investigates sites such as:

    MedChina, which offers dozens of scientific "topics for sale" and scientific journal "article transfer" agreements.

Among other services, these sites offer "authorship for pay" on articles already accepted by journals. He also found suspicious similarities in wording among papers, including:

    "Begger's funnel plot" gets dozens of hits, all from China. "Beggers funnel plot" is particularly revealing. There is no such thing as a Beggers funnel plot. ... "It's difficult to imagine that 28 people independently would invent the name of a statistical test,"

Some of the similarities may be due to authors with limited English using earlier papers as templates when reporting valid research, but some, such as the Begger's funnel plot papers, are likely the result of "mad libs" style fraud. And Lazoff points out they likely used sockpuppet reviewers:

    Last month, Retraction Watch published an article describing a known and partially-related problem: fake peer reviews, in this case involving 50 BioMed Central papers. In the above-described article, Seife referred to this BioMed Central discovery; he was able to examine 6 of these titles and found that all were from Chinese authors, and shared style and subject matter to other “paper mill-written” meta-analyses.

Lazoff concludes:

    Research fraud is particularly destructive given traditional publishing’s ongoing struggle to survive the transformational Electronic Age; the pervasive if not perverse marketing of pharma, medical device companies, and self-promoting individuals and institutions using “unbiased” research; and today’s bizarrely anti-science culture. 

but goes on to say:

    Without ongoing attention and support from the entire medical and science communities, we risk the progressive erosion of our essential, venerable research database, until it finally becomes too contaminated for even our most talented editors to heal.

I'm much less optimistic. These recent examples, while egregious, are merely a continuation of a trend publishers themselves started many years ago of stretching the "peer reviewed" brand by proliferating journals. If your role is to act as a gatekeeper for the literature database, you better be good at being a gatekeeper. Opening the gate so wide that anything can get published somewhere is not being a good gatekeeper.

The fact that even major publishers like Nature Publishing are finally facing up to problems with their method of publishing that the scholars who research such methods have been pointing out for more than seven years might be seen as hopeful. But even if their elite journals could improve their ability to gatekeep, the fundamental problem remains. An environment where anything will get published, the only question is where (and the answer is often in lower-ranked journals from the same publishers), renders even good gatekeeping futile. What is needed is better mechanisms for sorting the sheep from the goats after the animals are published. Two key parts of such mechanisms will be annotations, and reputation systems.
--- End quote ---
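The tell Seife relied on - an improbable phrase recurring verbatim across supposedly independent papers - can be surfaced mechanically. A minimal sketch, assuming plain-text abstracts are available (the function name and toy corpus below are invented for illustration):

```python
from collections import defaultdict

def shared_ngrams(docs, n=3, min_docs=3):
    """Map each n-word phrase to the set of documents containing it,
    keeping only phrases shared by at least `min_docs` documents."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        words = text.lower().split()
        for i in range(len(words) - n + 1):
            index[" ".join(words[i:i + n])].add(doc_id)
    return {phrase: ids for phrase, ids in index.items() if len(ids) >= min_docs}

# Toy corpus: three "independent" abstracts sharing an improbable phrase.
docs = [
    "results were assessed with a beggers funnel plot for bias",
    "publication bias was assessed using a beggers funnel plot here",
    "we drew a beggers funnel plot to check for publication bias",
    "an entirely unrelated abstract about soil chemistry and crops",
]
hits = shared_ngrams(docs, n=3, min_docs=3)
```

On the toy corpus this flags "beggers funnel plot" as shared by three of the four documents; real screening would need stemming, stop-phrase filtering, and a far larger corpus, but the principle is the same one Seife applied by hand.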

I'm not sure whether the term "peer review" carries any real weight nowadays - or whether it retains any scientific credibility or has any real meaning for science.
Interesting review of the state of affairs:
-IainB (January 07, 2015, 07:30 AM)
--- End quote ---

I lost faith a couple decades ago. I've since only received confirmation in my apostasy.

In this discussion thread, in a response to @xtabber here, I drew three conclusions:
Some conclusions we could arrive at here would include:

* A. Truth: You can't make something true out of a collection of logical fallacies. That would be an assault upon reason. Once you accept one invalid premise, you can accept infinitely more.
However, the depressing reality seems too often to be that many people are so unable to think rationally for themselves that they seem gullible to this kind of barrage of logical fallacy. One's head would be full of a confusing and probably conflicting mass of invalid premises, with ergo no real knowledge or understanding of truth.

* B. Peer review per se is not crucial as it cannot and does not certainly establish truth: We have already seen, in this discussion thread and others - e.g., including, the thread on CAGW, Thermageddon? Postponed! - that there is plenty of evidence to demonstrate pretty conclusively that peer review is an unreliable instrument for determining truth, as it can be and has been, and probably will continue to be used/abused to rationalise whatever careless or unethical/misguided scientists might want, because they cannot otherwise scientifically prove a pet theory or preferred/biased conclusion.
This is also well-documented in the literature - e.g., including as referred to in one of the spoilers above ("…on broken trust in peer review and how to fix it").

* C. Falsifiability is crucial:
Falsifiability or refutability is the property of a statement, hypothesis, or theory whereby it could be shown to be false if some conceivable observation were true. In this sense, falsify is synonymous with nullify, meaning not "to commit fraud" but "show to be false". Science must be falsifiable. - Wikipedia.

--- End quote ---
-IainB (October 05, 2013, 06:33 AM)
--- End quote ---

One of the things that often puzzles me is how easily we seem to be conned by false peer reviews and how we are seemingly so wilfully blind to the truth in things, and so I was very interested to read the lifehacker post Carl Sagan's Best Productivity Tricks, where it says:
(Copied below sans embedded hyperlinks/images.)
...Hone Your "Baloney Detection Kit"
Sagan was first and foremost a scientist, and that means he had a very specialized outlook on the world. In his book, The Demon Haunted World, he outlines what he calls his "baloney detection kit." The kit is essentially a means to test arguments and find fallacies. It's a great toolset for skeptical thinking. Here's part of his kit:

* Wherever possible there must be independent confirmation of the "facts."
* Encourage substantive debate on the evidence by knowledgeable proponents of all points of view.
* Arguments from authority carry little weight — "authorities" have made mistakes in the past. They will do so again in the future. Perhaps a better way to say it is that in science there are no authorities; at most, there are experts.
* Spin more than one hypothesis. If there's something to be explained, think of all the different ways in which it could be explained. Then think of tests by which you might systematically disprove each of the alternatives. What survives, the hypothesis that resists disproof in this Darwinian selection among "multiple working hypotheses," has a much better chance of being the right answer than if you had simply run with the first idea that caught your fancy.
* Try not to get overly attached to a hypothesis just because it's yours. It's only a way station in the pursuit of knowledge. Ask yourself why you like the idea. Compare it fairly with the alternatives. See if you can find reasons for rejecting it. If you don't, others will.
Sagan's kit here isn't just for science, of course. It's great for everything, from presidential debates to statistics. When you challenge those biases, you walk away with a better point of view. It's also a good toolset if you're making an argument at work, giving a presentation in school, or even just taking on a lively debate at the dinner table. The better you are at detecting baloney, the better your arguments will be in the long run. ...
(Read the rest at the link.)

--- End quote ---

It's all "habits of mind" really - thinking skills (De Bono).

Regarding "Spin more than one hypothesis," academia is a complete laughingstock. Just look at how doctoral dissertations are done - create a hypothesis, then set out to prove it. This is simply idiotic. A carefully thought-out hypothesis that proves to be false is also information, i.e. we now know that X is false, and narrowing the scope of a field is itself a valid result. But this doesn't happen when everyone tries to prove that their hypothesis is "true". Certainly, 'true' is sexier, but 'false' can also be useful, especially when we know why it is false.
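To make that concrete, here's a toy two-sample permutation test in Python (all numbers invented) where the honest outcome is a null result - the hypothesised difference fails to appear, and that failure is itself the information:

```python
import random
import statistics

def permutation_test(a, b, trials=10_000, seed=0):
    """Two-sample permutation test: how often does a random relabelling
    of the pooled data produce a mean difference at least as large as
    the observed one? Returns the estimated p-value."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / trials

# Hypothetical "treatment" and "control" samples drawn from the same process:
control = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2]
treated = [5.0, 5.2, 4.9, 5.1, 4.8, 5.4]
p = permutation_test(treated, control)
```

With these heavily overlapping samples the estimated p-value comes out well above 0.05, so the correct report is "no detectable difference" - a finding worth publishing, on the argument above, even though it proves nothing "true".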

But, just for kicks, here's a statement for people to mull over for a second, assess, and then click the spoiler. ;)

Statement) There are no studies that show that tobacco causes cancer.

Spoiler: That's actually true.

The studies are statistical and do not control for tobacco vs. tobacco laced with chemicals, which almost all tobacco is. e.g. Cigarettes are laced with ammonia because it freebases the nicotine and makes it 35x more powerful, and more addictive. There are many more chemicals added as well.

The question about what part of the cigarette causes cancer is unanswered. Is it the tobacco? Or something else?

Please - Do not read into that more than I've stated. I have not said "cigarettes are good and rainbow farting unicorns". Is it likely that tobacco by itself causes cancer? Probably. I don't know. I have no evidence to say one way or another, other than the flawed studies on it.

Oh, and just for fun... 1 more...

There are no peer-reviewed, double-blind, placebo-controlled studies to prove that large caliber bullets shot into the head cause death. :P

I like to pull that one out for those who are religious about science. They invariably very quickly retreat from their faith just long enough to start calling me names, then they return to their faith just as quickly. I find it probably just as amusing as they find it infuriating. :)

