Good science is never defensive. No matter how strongly we believe something to be fact, we should always welcome challenges. Reports of scientists seeking to quash alternative views, no matter how bizarre those views may be, sadden me. Debate gives us the opportunity to explain, educate, and possibly think about science in new ways.

In some cases, original reports that go unquestioned can lead to serious consequences for the public. Such appears to be the case for Study 329, published by Martin Keller and colleagues in the Journal of the American Academy of Child and Adolescent Psychiatry (JAACAP). Study 329 was an efficacy study of paroxetine (Paxil) for treating depression in adolescents. The study, funded by pharmaceutical giant SmithKline Beecham (now GlaxoSmithKline), has been cited in 572 other papers.

The paper had a rocky route to publication from the outset. When paroxetine failed to show any advantage over placebo on the eight initial outcome measures, the researchers revisited the trial using 19 secondary outcome measures, only four of which showed any significant effect of the drug. Internal documents released by SmithKline during a lawsuit in 2004 noted that failure to show efficacy would be “commercially unacceptable,” and the company contracted with a medical communications firm to revise the paper. The revised paper was sent to the prestigious Journal of the American Medical Association (JAMA), which correctly turned it down. JAACAP accepted a further revision of the paper over the objections of its peer reviewers. Over 2 million paroxetine prescriptions were written for American youth in 2002, but a 2003 FDA black box warning about suicidal ideation and other problems with selective serotonin reuptake inhibitors (SSRIs) such as paroxetine has hopefully made physicians more cautious.

A recent reanalysis of Study 329, appearing in the British Medical Journal, found that neither paroxetine nor imipramine, another commonly used antidepressant included in the original study, produced any benefit over placebo for teens. Not only were the drugs ineffective, but their adverse effects were more severe than the original study disclosed. The original study classified behaviors like cutting and jumping from heights as “emotional lability” rather than the self-harm behaviors they truly are.

The point of revisiting these studies is not to say that all science is bad, of course, and the last thing we need right now is to feed that idea in the public's mind. Instead, all we need to do is practice what we preach when it comes to the critical thinking skills we try so hard to share with our students. Everyone in the pathway leading to the publication of Study 329, from researcher to reviewer to editor, could have prevented its publication by simply following good rules of critical thinking.


9 Comments

chunsdor · September 23, 2015 at 12:56 pm

As a Biology major, I have many labs with projects and lab assignments. Along with these assignments comes statistical analysis, and when something does not match up, it is always tempting to try to find a variable or measurement to play with to prove your hypothesis true. While I have not turned in an assignment with skewed data, I have played around with numbers and found that in many cases it is very easy to subtly change data to fit your desired results. As Dr. Freberg stated above, it is important to apply critical thinking skills when performing research and reading findings to prevent corrupted data from becoming scientifically accepted. On a related note, I recently read an article on the statistics website FiveThirtyEight that examined how often data are manipulated and how easy it is to do. Here’s the link if anyone is interested: http://fivethirtyeight.com/features/science-isnt-broken/

mordanza · September 25, 2015 at 12:11 pm

I think that this problem of deception, especially with big companies, is more common than the public knows, or even wants to know. I completely agree with the idea that scientific findings should be debated, not only for a well-rounded understanding but also as a means of checking for skewed data, as highlighted in the blog. Generally, big companies like to make money, and big pharmaceutical companies like to have working products (because they spend so much money creating them). This blog showed how deceptive, even if only subtly, these companies can be, cherry-picking the data they present to the public. So I took this as a reminder to be careful and educated in choosing over-the-counter, and even prescribed, medication.

Laura Freberg · September 27, 2015 at 12:53 pm

When a researcher keeps searching until something (anything!) is significant, as SmithKline appears to have done after its initial outcome measures came up nonsignificant, we often refer to the process as “p-hacking.”

The temptation to use p-hacking when you’re working for a commercial organization like a pharmaceutical company or as a “publish or perish” professor is probably pretty high. This article shows how you can test for p-hacking using meta-analyses: http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002106
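
To make the multiple-comparisons problem behind p-hacking concrete, here is a minimal simulation sketch. It is my own illustration, not code from Study 329, the BMJ reanalysis, or the linked article; it simply assumes two groups drawn from identical distributions (no real drug effect) and 19 outcome measures tested at p < 0.05, mirroring the number of secondary measures described in the post. The group size of 90 is arbitrary.

```python
# Minimal illustration of the multiple-comparisons problem behind p-hacking.
# Both groups are drawn from the SAME distribution (no real drug effect), yet
# testing 19 outcome measures at p < 0.05 very often yields at least one
# "significant" result by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n_per_group, n_outcomes, n_simulated_trials, alpha = 90, 19, 10_000, 0.05

trials_with_false_positive = 0
for _ in range(n_simulated_trials):
    placebo = rng.normal(0.0, 1.0, size=(n_per_group, n_outcomes))
    drug = rng.normal(0.0, 1.0, size=(n_per_group, n_outcomes))
    # One independent-samples t test per outcome measure.
    _, p_values = stats.ttest_ind(drug, placebo, axis=0)
    if (p_values < alpha).any():
        trials_with_false_positive += 1

# With independent outcomes, roughly 1 - 0.95**19 (about 62%) of null trials
# produce at least one spurious "significant" finding.
print(f"{trials_with_false_positive / n_simulated_trials:.0%} of simulated "
      "null trials had at least one p < 0.05 outcome")
```

Pre-registering a small set of primary outcomes, or correcting for the number of comparisons, is exactly the kind of safeguard that critical thinking should have demanded before a shift from 8 to 19 outcome measures was accepted.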

mmontna · October 13, 2015 at 6:12 pm

Reading about companies and corporations playing with statistics, or “p-hacking” as you said, makes me want to go into research even more than I had wanted before. I am currently a Clinical Assistant for a local adolescent psychiatrist, and often when I tell people that, I get comments about “how horrible it is to be medicating youth.” Many people don’t understand the function of and need for medication when it comes to psychological health, and studies like this reinforce their doubts about and disapproval of pharmaceutical use in adolescents. I still see prescriptions for Paxil while working, and if it is ineffective in adolescents, great care should be taken in prescribing it because of the side effects. As shown, it is easy enough to sway statistics in a way that favors the outcome you desire. As the general public, we need to not let our opinions be swayed by a single study; we need to follow up on information. And that is one of the great things about science: the constant desire to test and prove.

Laura Freberg · October 24, 2015 at 2:05 pm

I just saw a report in my daily news that says that one out of four American adult women takes a psychotherapeutic drug. Wow! I know that some lives have been vastly changed for the better by the use of medication for psychological disorders, but one out of four pushes the limits of our definitions of psychopathology as “unusual” ways of being.

jennylu18 · October 25, 2015 at 6:40 pm

Bringing a new perspective to the same issue, I saw this relating to some of the business classes I’m taking toward my major. On the business ethics side, if the company knew it was using biased data, it should have been management’s job to fix and prevent this. In Organizational Behavior, I learned about the dangers of groupthink bias, and if this contributed to the decisions of individuals in the company, there should have been a sort of “devil’s advocate” warning them about what could, and did, happen when the public found out about the biased studies.

Sophie Marsh · October 29, 2015 at 9:48 am

Your class was the first time I was introduced to the phrase “Good science is never defensive,” and I think that is going to guide the way I view research. This article reminded me of a controversy I heard of that has been brewing in the psychological world recently (see article below). Researchers have found that approximately 1 in 8 psychological research papers contains errors large enough that they affect the conclusions of the study. Without critical thinking and questioning of these studies, psychology as a discipline may fall down a path of erroneous data. This unfolding evidence shows how important it is to view “good science” as never being defensive.

http://www.nltimes.nl/2015/10/29/one-in-eight-psychology-research-papers-contain-massive-errors/

Ariana Altman · December 1, 2015 at 8:04 pm

After reading this post, I took a step back and realized that this has probably occurred more often than it should have. There have probably been a handful of studies or papers, if not more, published with faulty information in them (whether intentional or unintentional). I think that brings up a good point: in science we need to be especially careful and critical before publishing things for the public. Of course we don’t live in a perfect world and mistakes are made, but when it comes to matters like this, there are a lot of ethical considerations and issues that must be recognized. When conducting research, one must be thorough and careful in order to meet guidelines, and this is strictly enforced; the same should be done for the following step, which is the sharing of the information collected. Critical thinking is a very important skill that is drilled into our heads as soon as we hit elementary school. It’s definitely a skill that needs to be exercised and maintained, especially when it comes to things like this.

vimorris@calpoly.edu · May 8, 2016 at 3:44 pm

I often wonder which of the norms I engage in today will be advised against in the years to come, when further research reveals their harm. For my children and future generations, I will be thankful that someone was willing to challenge what can often be assumed settled. I agree with Dr. Freberg when she mentions her hope that everyone will use critical thinking when approaching any topic or discussion in the science field, and I would go on to say the same about all things in life, too.
