One of my all-time favourite bloggers, the Oxford neuroscientist Prof. Dorothy Bishop, or DeevyBee as she is known on Twitter, has given an amazing open-access lecture focusing largely on the misunderstanding of neuroscience (click down to the “Emanuel Miller Lecture” to play the video). The talk is incredibly informative and digestible; even those with no understanding of neuroscience or psychology whatsoever will take a great deal away. The poor understanding of neuroscience is one of the main reasons why I started this blog, so if you like this blog then you’ll love this lecture.

Click here to download the slides.

The talk begins with a reasoned explanation of how and when we should be sceptical of neuroscience research. Bishop goes on to cite four key reasons why certain kinds of scientific research will inevitably result in false positives:

“The four horsemen of the apocalypse”

1. Maturation – People develop naturally over time.

“There seems to be an implicit assumption that the brain, because it is a physical organ is somehow not going to change unless you give it some intervention – that it is there as a static thing. This is completely untrue… as evidenced by this series of images.”


The brain changes naturally over time

2. Practice effects – When people keep doing the same test again and again, they get better at it.

“…purely to do with the fact that you have got better at doing the test and nothing to do with your abilities… People forget that this can apply to language tests and things like that. It also applies to some extent to the brain; often we don’t know how important this is because brain imaging is so new… clearly if you get brain responses to novelty, that means if you do something twice, the first time round you will get different responses to the second time round, when it is no longer novel.”

3. Regression to the mean – a statistical artefact of longitudinal studies that is exacerbated if you select participants on the basis of a low score on a test (for example, participants with developmental difficulties). Bishop does an outstanding job of explaining the problem about 18 minutes into the talk.

“Regression to the mean is as inevitable as death and taxes”

Campbell and Kenny (1999) A primer on regression artefacts
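If you want to see this horseman in action for yourself, it’s easy to simulate (a quick sketch with made-up numbers: two noisy test sessions, no intervention whatsoever, and the lowest scorers “improve” anyway):

```python
import random

random.seed(1)

# Simulate a test score as true ability plus measurement noise, taken
# twice with NO intervention in between. The numbers are invented.
n = 10_000
ability = [random.gauss(100, 10) for _ in range(n)]
test1 = [a + random.gauss(0, 10) for a in ability]
test2 = [a + random.gauss(0, 10) for a in ability]

# Select a "low scorers" group: the bottom 10% at time 1 (the kind of
# selection used when recruiting participants with difficulties).
cutoff = sorted(test1)[n // 10]
selected = [i for i in range(n) if test1[i] < cutoff]

mean1 = sum(test1[i] for i in selected) / len(selected)
mean2 = sum(test2[i] for i in selected) / len(selected)
# The group's time-2 average is noticeably higher than its time-1
# average, purely because extreme scores were partly luck.
print(round(mean1, 1), round(mean2, 1))
```

No treatment happened between the two tests, yet the selected group “gets better”; this is exactly the artefact an untreated control group protects you from.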

4. The placebo effect – the obvious consideration that continues to undermine poorly designed research, although according to Bishop the three issues listed above could actually be having an even greater impact than the placebo effect.

The Solution?

Bishop explains that a control group is vital in order to achieve valid findings, but a control group alone is not enough. We should also be asking questions such as:

  • Are the groups randomly assigned – or is there some other factor at play?
  • Is the control group given an alternative treatment? If not, why not?
  • What causes drop out? People don’t tend to drop out at random and this can have a very big effect on results.
If something smells fishy, it probably is fishy.
Sometimes things just go wrong, and currently in the field of brain imaging an awful lot of things have been going wrong. This is well illustrated by the now famous study of the dead fish in the brain scanner: apparently significant results were found in two different trials in which a dead fish was “asked” to determine facial expressions. For this reason, all research – but particularly abstract research such as brain imaging – should be taken with a pinch of salt until the results have been replicated, ideally a few times.
If you like this lecture, subscribe to Dorothy Bishop’s blog; it is one of those blogs that is so useful that frankly it should be required reading for all concerned. Also, keep an eye out for Dorothy’s forthcoming paper, in which she will be publishing some of the ideas she presented in the Emanuel Miller Lecture.


Weisberg, Deena Skolnick. (2008). The Seductive Allure of Neuroscience Explanations. Journal of Cognitive Neuroscience, 20(3), 470–477. DOI: 10.1162/jocn.2008.20040

Campbell, D. T., & Kenny, D. A. (1999). A Primer on Regression Artifacts. New York: Guilford Press.

Follow Neurobonkers on TwitterFacebookGoogle+RSS, or join the mailing list.

Is this paper reliable? A question that every academic will have asked themselves. We have all stumbled across a paper in a journal that, on closer inspection, isn’t really a journal at all.

This year 134 suspect new journals have appeared from the abyss, all published by the same clandestine company Scientific & Academic Publishing, USA. Scientists have been quick to raise the alarm and ruthless in their response (1)(2).

The publisher seems keen to announce all of these new journals at once and to accept papers, yet most of the positions on the journals, ranging from “editor in chief” and “editorial board member” to “reviewer”, seem to be vacant. Not to worry, just send in “a brief resume (and) SAP will revert (sic) back within two working days”. I am trying to be very careful not to pass judgement on this new group; the internet will do that itself. The chatter in the blogosphere is not promising, however, and the discussion on their Facebook page is even less supportive.


Peer review serves a vital purpose; it is still the foundation of how scientists operate, but the system is far from foolproof. Only last week I blogged about an oncology paper published in the “Breast Journal” that has “15 references, but they’re all about sex, not cancer”. The rapid and durable reaction of the science blogosphere helps immunise us against the pernicious effects of the Bad Science that Ben Goldacre documents on a weekly basis.

We shouldn’t be too swift to disregard attempts to sidestep traditional methods of publishing, however. Heather Morrison is one researcher stepping outside the box: she is publishing her thesis as she writes it and inviting comment along the way. You don’t even have to wait for her to publish her work if you wish to use it; even her draft is published under the Creative Commons Attribution-NonCommercial-ShareAlike 2.5 licence. Her work in progress, Scholarly Communication in Crisis, draws eye-watering conclusions regarding the profits of the four largest publishers, who own the majority of the academic publishing market:

All are in the for-profit sector, and the profits are enormous. As reported in the Economist (2011): “ Elsevier, the biggest publisher of journals with almost 2,000 titles, cruised through the recession. Last year it made £724m ($1.1 billion) on revenues of £2 billion—an operating-profit margin of 36%”. Springer’s Science + Business Media (2010) reported a return on sales (operating profit) of 33.9% or € 294 million on revenue of € 866 million, an increase of 4% over the profit of the previous year. In the first quarter of 2012, John Wiley & Sons (2011) reported profit of $106 million for their scientific, medical, technical and scholarly division on revenue of $253 million, a profit rate of 42%. This represents an increase in the profit rate of 13% over the previous year. The operating profit rate for the academic division of Informa.plc (2011, p. 4) for the first half of 2011 was 32.4%, or £47 million on revenue of £145 million, an increase of 3.3% over the profit of the previous year.

Scholarly Communication in Crisis by Heather Morrison (Open access)

These huge profit margins are not only increasing far above the rate of inflation, but publishers are now actively forcing third world countries out of academia:

To our dismay and anger, a few international STM publishers, using their monopolistic position, recently demand to raise the subscription prices for their full-text database at a yearly rate of more than 14% for the next 3 years and by 2020, to raise the prices for developing countries to the level of those of the developed countries.

National Science Library of the Chinese Academy of Sciences (2010) cited in Scholarly Communication in Crisis by Heather Morrison (Open access)

We expect our institutions to spend millions on journal subscriptions; this is deemed essential so that the fruits of knowledge help fund the research of the future. The system fails when over a third of the cash flow is creamed off the scientific process at the publishing stage. Moreover, all those outside leading western universities are left without access to journals (assuming they do not have $799-plus to spare for a subscription, or $25 for one-time access, for one day, at one computer (Sage, 2011)). Ironically, much research is therefore unavailable to the very people it is designed to help. This problem is something all scientists should be aware of; perhaps not only the Medline ranking but also the profit margins of publishers should be a key consideration in choosing which journal to publish with. (Edit 16/01/2012: On the topic of impact factors, it has been revealed that some publishers are now requiring researchers to cite recent research from their own journal in a bid to artificially boost their journal’s rankings – a very dodgy practice indeed.)

The cruel irony is that for research to be placed under lock and key in a for-profit journal, the researcher will have paid for education and time spent researching; then the researcher’s institution must spend hundreds on hotel fees and flights, many hundreds of pounds for a conference ticket, plus an insertion fee, often based on the number of pages in the publication (as is the procedure in the world’s largest engineering journal). Even open access publishers charge hefty fees: BioMedCentral’s fee of $1,640 is described as average, and even PLoS charge $1,300 to $2,900 to publish. Surely a happy medium can be found; unfortunately for Scientific & Academic Publishing, USA, the blunderbuss approach is unlikely to be the way to the hearts and minds of the academic community, but we shall have to wait and see.

Upon discovering that research isn’t published in a reputable journal, perhaps the real question on our lips should not be “is this journal for real?” but rather: has this researcher shown that they applied appropriately rigorous controls, and is there a free and fair forum in which the researcher’s conclusions can be questioned?

Morrison, Heather. (2012). Scholarly Communication in Crisis. Freedom for scholarship in the internet age. Simon Fraser University School of Communication. Doctoral dissertation (in process)

Update 16/01/2012: Further reading from today’s Guardian: Academic publishers have become the enemies of science


Do you ever feel like you are living in a rabbit hole? The same recycled news: phone tapping, phone tapping, phone tapping. The internet is making you stupid. Blah. Blah. Blah.

Once again a controversial academic paper is claiming that the internet is damaging our ability to recall, or at least changing the way we think. This time it has appeared in the journal Science, titled “Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips” (£). We’ve previously discussed the vast amounts of unfounded conjecture surrounding this topic, but until now there has been little (if any) published research that comes close to shedding any light on the issue. Somewhat unsurprisingly the paper is being taken very seriously; however, on closer inspection it presents a far from watertight case. As Baldrick might say, this case is in fact so leaky a marauding lascivious nun wouldn’t use it to surreptitiously store her illicit liquor stash. Darling.


Darling's Leaky Case

The paper uses an interesting (ahem) technique of measuring how much participants are thinking about computers when asked to recall information. The method, called the “Stroop test”, is traditionally used as a textbook measure of attention.


Why is this pattern familiar? Are you thinking what I'm thinking..?

In the experiment, 106 Harvard graduates were given trivia questions. After this, coloured words either relating or not relating to computers popped up, and participants had to name the colour of each word. This is the crucial bit. The logic behind the “finding” that “Google affects memory” rests on the presumption that if the Harvard grads were already thinking of googling the answer, this would delay their response upon seeing the word “Google” (or “Yahoo”) in a Stroop test. Now, as always, I hate to throw a spanner in the works of a watertight hypothesis, but there does seem to be a slight confounding variable in the fact that the Google logo is, erm, multi-*******-coloured.



I’m always struck by the leap of faith that goes into reaching conclusions in studies such as this, but this time it just seems plain ridiculous. The researchers claim that:

“People who have been disposed to think about a certain topic [i.e. internet search providers] typically show slowed reaction times (RTs) for naming the color of the word when the word itself is of interest and is more accessible, because the word captures attention and interferes with the fastest possible color naming.”

One of the things I tend to find a bit odd is that such tiny results can be used to reach such sweeping conclusions; in this study the difference in reaction time between the “computer terms” and the “general terms” was a fraction of a second…


Difference in reaction times on Stroop task after hearing each word
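For a feel of just how small a “fraction of a second” is against trial-to-trial noise, here’s a back-of-the-envelope simulation (a sketch with invented numbers: the ~30 ms difference and ~150 ms spread below are made up for illustration, not taken from the paper):

```python
import math
import random
import statistics

random.seed(0)

# Invented numbers: two RT conditions differing by ~30 ms on average,
# buried in ~150 ms of trial-to-trial noise, with the paper's n of 106.
n = 106
computer_words = [random.gauss(0.830, 0.150) for _ in range(n)]
general_words = [random.gauss(0.800, 0.150) for _ in range(n)]

diff = statistics.mean(computer_words) - statistics.mean(general_words)
# Welch-style standard error and t statistic for the group difference.
se = math.sqrt(statistics.variance(computer_words) / n
               + statistics.variance(general_words) / n)
t = diff / se
print(f"difference = {diff * 1000:.0f} ms, t = {t:.2f}")
```

The observed gap is dwarfed by the noise on individual trials, which is exactly why a statistically detectable difference can still be a trivially small effect.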

Never mind the monster of a confounding variable that is the Google logo being famously multi-[deep breaths now]-coloured; surely there are positively dozens of other factors at work, such as the fact that the terms “Google” and “Yahoo” are likely to elicit far more complex ideas and memories than the control words “Nike” and “Target”. I mean, come on, the mere words “Nike” and “Target” are unlikely to excite even the most hardcore sportswear fans, let alone a bunch of Harvard graduates.


Hardcore Sportswear Fans

Come to think of it, I’m pretty sure there are plenty of Harvard graduates who would have loved nothing more than to have been the ones to come up with the code underlying Google (Yahoo, not so much).

I rest my… case.


Sparrow, B., Liu, J., & Wegner, D. (2011). Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips. Science, 333(6043), 776–778. DOI: 10.1126/science.1207745


I’d like to dedicate this post to Dr. Ben Goldacre who inspired me to begin this blog with his absolutely outstanding Bad Science Blog.

Ecstasy? I’m sorry, I thought it was Crystal Meth!


Drug “education” pamphlets routinely state that ecstasy causes “brain damage” and “Parkinson’s disease”. There is no valid evidence for the former in humans, and the little evidence that exists regarding the latter actually suggests the precise opposite. The Parkinson’s claim is based on a study published by researchers at Johns Hopkins University which has since been retracted. The reason for the retraction was that the monkeys on whom the study was based were “accidentally” injected (yes, injected) with crystal meth instead of ecstasy. And these were not the only things wrong with this study…


Due to our insane drug laws, crystal meth and ecstasy could feasibly “get mixed up” on the street by an insane or idiotic drug dealer. In a lab, however, you’d have to be criminally incompetent to allow this mistake to happen; scientists don’t just have stacks of crystal meth and ecstasy lying around. Despite ecstasy being taken by millions every weekend, the law makes it practically impossible for scientists to get their hands on it for research, especially with humans. In this respect the use of non-humans is not unusual: no well-controlled trials (let alone randomised controlled trials) have ever been done with humans on most illegal drugs.


Another odd thing about the Johns Hopkins study was that the monkeys were injected three times, at intervals of three hours. Ricaurte, the author of the study, himself conducted a study in 1988 demonstrating that injection doubles the toxicity of MDMA; however, this was not discussed in the retracted paper, and its title actually claims “a common recreational dose”. It was also a strange conclusion for the scientists to suggest that ecstasy causes Parkinson’s without addressing their staggering finding that two of their ten monkeys dropped dead before they could be given their third dose. It’s as if they think that at raves it’s perfectly normal for one in five ecstasy users to inject themselves with ecstasy and then promptly drop dead.

The dosage in the study, though high compared to recreational doses, isn’t particularly out of the ordinary for animal MDMA research. The study cites a key study funded by the US National Institute on Drug Abuse (NIDA) that injected monkeys with twice the dose they used, noon and night, for four consecutive days (based on their calculation of mg per kilo, the equivalent for a 75 kg human would be six grams of pure MDMA over four days, taken at breakfast and bedtime; most MDMA users probably wouldn’t see that much MDMA in their lifetimes). This dosage and timespan is so nuts that the fact that all of the monkeys in the study didn’t drop dead immediately should suggest ecstasy is a pretty safe drug. Instead, the NIDA study is one of an assortment of studies used to support the claim that ecstasy causes brain damage. Of course ecstasy causes damage to animals if you inject hideous amounts of it on a twice-daily basis for days on end. Neither study bothers to explain why the monkeys were injected rather than given the drug by the normal method of oral administration.
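For the curious, here’s the back-of-the-envelope arithmetic behind that six-gram figure (working backwards from the numbers in the text; the per-dose figure is inferred from this article’s own framing, not quoted from the paper):

```python
# Sketch: sanity-check the article's dosage claim by working backwards.
# Assumptions (from the text, not the paper): injections at noon and
# night for 4 consecutive days, and a 6 g total for a 75 kg human.
body_weight_kg = 75
doses = 2 * 4              # noon and night, 4 consecutive days = 8 doses
total_mg = 6000            # "six grams of pure MDMA" over the four days

per_dose_mg = total_mg / doses                        # mg per injection
per_dose_mg_per_kg = per_dose_mg / body_weight_kg     # scaled per kilo
print(per_dose_mg, per_dose_mg_per_kg)                # 750.0 10.0
```

So each of the eight injections works out at 750 mg for a 75 kg human, i.e. roughly 10 mg per kilo per injection, which is several times a typical recreational oral dose.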


These papers echo the work of US researchers who famously blasted cannabis smoke at monkeys through gas masks every day for 90 days, with so much smoke that many died from carbon monoxide poisoning or suffocation.


You may wonder why these scientists behaved so appallingly. When the World Health Organisation (WHO) was commissioned to conduct the largest ever study of cocaine use, it concluded that cocaine was safer than alcohol and tobacco and generally very useful. The US representative to the WHO threatened to withdraw US funding for all their research projects and interventions unless the organisation “dissociated itself from the conclusions of the study” and cancelled the publication (leaked WHO report here). Professor Nutt, the head of the UK Government’s advisory council on drugs (ACMD), was sacked for saying much the same thing about ecstasy.


When practically the entire ACMD resigned in outrage last year and the leaders of the medical and legal establishment came out in full support of the scientists, the Government’s reaction was to propose a bill that will remove from law any remaining remnants of their influence. The bill is currently in the final stages of becoming law. Professor Nutt’s replacement, war-on-drugs puppet Dr. Raabe, was such a fruitcake that the Government had to remove him before his first day at work, after it emerged he was also at war with homosexuals, abortion, contraception and, well, basically everything.


Only one randomised controlled trial of ecstasy has ever been conducted. Last year 15 PTSD patients who were extremely resistant to psychotherapy and existing medications were treated with MDMA. The study concluded that ecstasy was not only safe but extremely effective for this purpose. There has also been one vaguely controlled study of ecstasy users, completed a month ago, conducted independently with a $1.8m NIDA grant. It demonstrated absolutely no cognitive impairment, even in heavy ecstasy users. It is still completely free and open access, so get it while it’s hot. It’s important to remember, however, that the sample sizes in these studies are tiny; until a controlled study is conducted with a decent sample size, it will be impossible to say definitively how safe ecstasy is.

Fun fact: no scientist in recorded history has EVER suggested that ecstasy drains spinal fluid. A number of ecstasy studies involved draining the spinal fluid of users in search of evidence of harm. Leaking spinal fluid isn’t caused by ecstasy; it’s caused by spinal taps. So will the anti-drug organisations please recall their bullshit pamphlets. This pamphlet by the Australian Federal Police, for example, reads like a comedy of errors written by a chimp. Ecstasy probably doesn’t kill if pure and used properly. Misinformation definitely kills. Good day.

Edit: If you think the botched Johns Hopkins paper is no longer relevant, I’d like to point out that its authors are the only scientists cited to support the claim that ecstasy causes brain damage on the US Government’s (2010) ecstasy “fact sheet”. The paper cited is an incredibly biased and uncritical review of what is mostly their own research (they cite 10 of their own papers), published the year before the botched study.

Click here to read last week’s article on why a multi-billion dollar industry is paying advertising agencies to spread misinformation about drugs.

Key References

Ricaurte, G. (2002). Severe Dopaminergic Neurotoxicity in Primates After a Common Recreational Dose Regimen of MDMA (“Ecstasy”) Science, 297 (5590), 2260-2263 DOI: 10.1126/science.1074501

Mithoefer MC, Wagner MT, Mithoefer AT, Jerome L, & Doblin R (2010). The safety and efficacy of ±3,4-methylenedioxymethamphetamine-assisted psychotherapy in subjects with chronic, treatment-resistant posttraumatic stress disorder: the first randomized controlled pilot study. Journal of Psychopharmacology (Oxford, England). PMID: 20643699

Halpern JH, Sherwood AR, Hudson JI, Gruber S, Kozin D, & Pope HG Jr (2011). Residual neurocognitive features of long-term ecstasy users with minimal exposure to other drugs. Addiction (Abingdon, England), 106 (4), 777-86 PMID: 21205042

Insel TR, Battaglia G, Johannessen JN, Marra S, & De Souza EB (1989). 3,4-Methylenedioxymethamphetamine (“ecstasy”) selectively destroys brain serotonin terminals in rhesus monkeys. The Journal of pharmacology and experimental therapeutics, 249 (3), 713-20 PMID: 2471824



So before we continue our theme this week of lampooning Fleet Street’s utter failure to understand the basic principles of science, I’ll give you a break with this little gem of Neanderthal-level fire and brimstone I came across in the Washington Times this week. According to the article, 1 in 6 women are addicted to porn. The headline should set off alarm bells right away. Who decided they were addicted? Women in general? American women? Or just women who read a particular crackpot website, perhaps?

NB: in many crackpot reports the working definition of “addiction” is just “use of X”, which explains in one sentence a lot of the strange stats you may have seen referring to every type of addiction. (The case for this haphazard use of the word is supposedly that when you just go up and ask someone who uses “X” whether they are addicted to porn, alcohol, cannabis or whatever, the standard response is “f*** off”.) The academically accepted definition, however, requires distress of the individual sooner or later.

So after a brief look into the referencing of this study, we can see that this earth-shattering finding was found by… no, not a study, but a survey by “Today’s Christian Woman”, presumably of Christian women (we are left to guess, however, because none of these crucial facts are included in this three-page-long abomination). Oh, and it was conducted, erm, seven years ago (what the hell, that’s not even news!). Not only that, but outside of this newspaper article I can’t find any evidence anywhere on the internet that this survey ever existed. It certainly doesn’t seem to have been cited by any other reputable media in the, erm, seven years since it was published. There are a couple of mags and blogs with similar titles, but they don’t seem to have any evidence of this survey. Which is a real pity, because I was really looking forward to hypothesising about why respondents to a survey on a website for Christian mothers had such a massively higher level of pent-up sexual angst than the average woman. (Put the pitchforks away, that’s called a joke; if you didn’t get that you might as well sod off right now.)


OK, well, I could go on all day about the methodological and statistical catastrophes in this article, but that would be missing the wood for the trees. (Why don’t you try it for yourself? This one’s a textbook case. TIP: find who conducted the other study in this article and check what they actually found.)

The article goes on to cite Mary-Anne-Laydon, a professor (please god no) of women’s studies at Wheelock College in Boston, as saying (as if it were definitive fact):

“The more pornography women use, the more likely they are to be victims of non-consensual sex”

The article then continues on to chat about some other random bullshit without even attempting to address this second earth-shattering statement in the story. Seriously, did she just say that? Yes. This illustrates superbly an extraordinarily basic principle that newspapers in general don’t seem to get…


Correlation ≠ Causation

Obviously, to any rational being there are most probably dozens of uncontrolled factors at work. Without some earth-shattering evidence to the contrary, one bullshit study – from a source too obscure to bother citing in a three-page text – finding that some women who were raped watched porn, while some other women who said they didn’t watch porn didn’t get raped, tells you absolutely, categorically, sweet f*** all. It’s absolutely irrelevant to anything and everything ever.
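If you want to see just how easily a hidden third factor manufactures a correlation out of thin air, here’s a quick simulation (all numbers invented for illustration; neither variable causes the other, yet they correlate strongly):

```python
import random

random.seed(42)

# A hidden confounder drives both variables. Neither A causes B nor
# B causes A, yet the two correlate strongly.
n = 1000
confounder = [random.gauss(0, 1) for _ in range(n)]
a = [c + random.gauss(0, 1) for c in confounder]  # variable A
b = [c + random.gauss(0, 1) for c in confounder]  # variable B

def pearson(x, y):
    # Plain Pearson correlation coefficient, no libraries needed.
    m = len(x)
    mx, my = sum(x) / m, sum(y) / m
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var_x = sum((xi - mx) ** 2 for xi in x)
    var_y = sum((yi - my) ** 2 for yi in y)
    return cov / (var_x * var_y) ** 0.5

print(round(pearson(a, b), 2))  # a strong correlation, zero causation
```

By construction the correlation here is entirely the confounder’s doing, which is exactly why a raw correlation between porn use and anything else tells you nothing about cause.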

Good day to you.

