The Wansink Dossier: An Overview

On November 21, 2016, Brian Wansink (BW) published a blog post, “The Grad Student Who Never Said ‘No’” (note: BW has since deleted the post, so I had to link to a cached version), which received heavy criticism both for its unconventional career advice and for the questionable research methods it described. Before I go into more detail regarding these questionable research practices, here is some background about BW:

Wansink is a professor at Cornell University and a high-profile researcher with an impressive track record. He has been a ‘world-renowned eating behavior expert for over 25 years‘, was appointed by the White House to lead the US Dietary Guidelines (2007-2009), is a keynote speaker at conventions around the world, and is the author of the bestselling books Mindless Eating and Slim by Design. As an academic he has published hundreds of papers, which have been cited over 20,000 times, and he has a very impressive h-index of 72.

A lot has happened since BW posted the above-mentioned blog post in November 2016. The short version is that we (Nick Brown, Jordan Anaya, and I) privately contacted him about a wide range of errors and inconsistencies (more on that later) that we spotted in the four papers he mentioned in the blog post. Although he initially responded to our emails, and even offered to let me co-author a paper with him (?), he stopped replying once it was made clear that we wanted to see the data to assess the source of the apparent inconsistencies in his papers.

As BW stopped replying, we were forced to go public. After all, we have a communal responsibility to assess and verify the veracity of the scientific literature. As such, we published a pre-print detailing the more than 150 errors in the 4 papers. This report has now been published at BMC Nutrition. Wansink and his colleagues responded with a research statement, shared the data underlying these 4 papers, and issued some corrections. The released dataset and the issued corrections contained similar inconsistencies, which we outlined in a second pre-print. Neither Wansink, his co-authors, nor Cornell University has responded to these errors. We proceeded to check more papers, and there is now an ever-increasing list of research articles (co-)authored by Brian Wansink which have been criticized for containing serious errors, reporting inconsistencies, impossibilities, plagiarism, and data duplications.

The remainder of the post is dedicated to outlining these errors. As the scope of this post has rapidly increased in a short time, I have split it into multiple sections.

Brief Summary

To the best of my knowledge, there are currently:

  • 52 publications from Brian Wansink which (are alleged to) contain minor to very serious issues,
  • which have been cited over 4,000 times,
  • which appear in over 25 different journals and in 8 books,
  • and which span over 20 years of research;
  • 5 of these articles have been retracted, and
  • 14 have been corrected.

This is not an exhaustive list, but only what has been reported on so far.

Goal of this post

This post aims to provide an up-to-date list of these articles with a brief description of the critiques. Although I am personally involved in some of these investigations, I cannot take responsibility for the veracity of the critiques listed here. Nevertheless, I aim to report only what I believe is justified, so please comment if you spot any errors. Importantly, this is not a witch hunt nor a personal attack, but an effort to do my part in improving the quality of the scientific literature. Feedback is welcome!

UPDATE: Be sure to also read the two papers we published on this investigation:

  1. Statistical heartburn: An attempt to digest four pizza publications from the Cornell Food and Brand Lab (published at BMC Nutrition)
  2. Statistical infarction: A postmortem of the Cornell Food and Brand Lab pizza publications (pre-print at PeerJ)

As well as this report by Eric Robinson with similar conclusions regarding the (highly) questionable nature of the discussed research:

  1. The science behind Smarter Lunchrooms (pre-print at PeerJ)

Corrections and Retractions

Table summarizing the type and severity of the inconsistencies

Detailed list of individual papers with errors


14 thoughts on “The Wansink Dossier: An Overview”

  1. Consider sending your report to the research integrity officer of Wansink’s university, if you have not already done so.

  2. This is extensive work. This must have taken you a very long time. It is indeed a pretty damning report. Academic malpractice needs to be detected so your readers are thankful for your post.

    Can I ask what your motivations are for publishing this detailed, targeted analysis here? Is this part of your research (i.e., are you funded to do this kind of research, which would in part explain the generous amount of time and focus allocated to it)? One suspects the problem is both systemic and endemic and not unique to a particular individual. Apart from those briefly stated, what were the selection criteria that led you to dig deeply into Wansink’s publications? Were Wansink and his co-authors contacted about this study for comment? Were Wansink et al.’s editors contacted? COPE?

    One cannot help wondering what would be the most ethical, collegial way to denounce academic malpractice post hoc over time.

    • It has indeed taken quite some time and effort, although this blog post is just a compilation of already existing research. I am not paid or rewarded in any way for these efforts, nor are any of the other people who are actively investigating Wansink. So why do we do it? Personally, because I think it is important that the scientific literature is accurate. It is often said that science is self-correcting; but this is only true because scientists put in the effort to accomplish this. This is my way of contributing, in addition to my regular research.

      Wansink got our attention because he more or less admitted to using a wide range of questionable research practices on his own blog, found here. After inspecting some of the papers he himself describes there, I spotted a range of irregularities. On Twitter I found out about others who were doing the same thing, so we joined forces and wrote a pre-print describing 150 errors in just 4 papers. I do not particularly care about Wansink’s research, but insofar as my aim is to help make science more accurate, his work is merely ‘low-hanging fruit’.

      Long before we started communicating publicly, we attempted to communicate in private with Wansink. We asked him to share his data (which he was supposed to do given the publication guidelines), but he refused. He did offer to make me a co-author on a paper with him (whether that was genuine or some kind of bribe I do not know). After he stopped responding we contacted Cornell University’s Office of Research Integrity and Assurance (ORIA) and the Institutional Review Board (IRB). They replied only very briefly, stating that they supported “open inquiry and vigorous scientific debate” but that it was up to the researchers to decide whether they want to share data or not. They never commented on the massive number of errors we found in just 4 papers. You can read more about that in one of my earlier posts here.

      Communication with journal editors seems to have become much more fruitful as of late, with many editors responding that they will further investigate the papers.

      Like you, I also wonder what the most ethical way is to denounce academic malpractice. I hope that my efforts are considered to be a good way to do so. If not, please do provide feedback on how I can do it in a better way.

    • Thanks Lielais. They indeed replied to the issues raised in the 4 pizza papers. Like you said, they have not responded to the still-increasing list of papers which contain similar and other kinds of issues. To be honest, I am somewhat baffled by their response; while it makes sense from a PR perspective, they do not seem to take all this very seriously yet, as they downplay the critiques by referring to them as “raised questions” or focus only on self-plagiarism (which is often considered less severe than statistical errors and, say, impossible data/claims).

  3. Whoever has been funding his work should take notice of these findings. It is my understanding he received a large non-competitive award from the USDA; I think it was over $5 million. I also fault academia in general for pressuring faculty to publish, publish, publish. I believe this is a plausible explanation for why he did this: quantity matters more than quality.

  4. De Volkskrant mentions Diederik Stapel. Nothing has been learned from that case, not even in the Netherlands,
    because if you have complaints about a study, you still have to go to the university or institute that conducted the research.
    The study on toilet training has unfortunately disappeared from the internet, but it was so bad that it is now considered completely normal for children to become toilet-trained only at age 4, whereas children used to be toilet-trained at 1.5 years. And in Belgium the results are entirely different.
    But TNO simply forwards complaints to the failing scientists themselves.

  5. I may have been one of the first to challenge a paper by Dr. Wansink, in my web document Debunking a Shoddy “Research” Study. (9.67 Degrees Of Deception.), April 2014. Wansink’s paper, “Eyes in the Aisles”, was then widely reported (favorably) in the media, without critical scrutiny. Yet Wansink’s paper was flawed in so many ways that an undergraduate science student could have exposed it as a case of careless research methodology and misleading analysis. I was aware then of the flaws in the data analysis, but considered them minor compared to the paper’s other, more serious flaws. One of the most serious errors was a factor-of-2 error in a bogus calculation of the elevation angle of the eyes of cartoon characters. When I first read Wansink’s paper I thought it was a deliberate joke, spoof, or parody of “scientific studies”. I still wonder why it has taken so long to “blow the whistle” on this so-called research, which gives science a bad image. I politely requested the raw data from Wansink. He refused, saying that the data was still being analyzed for future publications. Subsequent inquiries to Wansink went unanswered.

    • This sounds exactly like our experience. In many cases I suspect that data exists somewhere, but it bears very little relation to the published article. It’s as if Wansink conducts a study, then uses it as the basis to write a novel that loosely follows the same narrative, without being too concerned about what actually happened.

      In this video Wansink says that the eyes are looking down at 4 degrees, not the 9.67 degrees mentioned in the article. This kind of error makes very little sense to me unless the actual research is supremely unimportant to him. Compare the JAMA Pediatrics article (now retracted) in which, in the initial version, the conclusion was that a study conducted with 8-11 year olds (or, as Wansink now claims, he *believed* was conducted with 8-11 year olds) showed that the effect had been demonstrated in “preliterate” children. It is very tempting to believe that almost nothing is backed up by facts, and the key aim is to tell a good story each time.

    • It’s not clear that Benford’s Law is always appropriate for the kinds of data here. However, in a couple of cases we have found some troubling patterns in the *final* digits of some variables, when these ought to be random. For example, if you measure something like reaction times in the 30-50 second range to two decimal places, the digits for the final decimal place ought to be uniformly distributed, or perhaps there might be a tiny “Benford-like” effect giving you slightly more 0s and 1s. But if you find a lot of 5s and 6s, and a lot fewer 0s than expected, it could be that the numbers have been made up by a human, because people are not good at making up numbers that look random (zeroes are “too round”). In one case (currently with the journal editor) the distribution of the trailing digits gives a p value of .00002.
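      The trailing-digit check described above can be sketched in a few lines. This is a minimal illustration with simulated data and a plain chi-square statistic against a uniform distribution; the function names and the 30-50 measurement range are my own hypothetical example, not the actual analysis used in the investigation:

      ```python
      import random
      from collections import Counter

      def trailing_digit_counts(values, decimals=2):
          """Count the final decimal digit of each measurement."""
          digits = [int(round(v * 10**decimals)) % 10 for v in values]
          return Counter(digits)

      def chi_square_uniform(counts, categories=10):
          """Chi-square statistic against a uniform distribution over digits 0-9."""
          n = sum(counts.values())
          expected = n / categories
          return sum((counts.get(d, 0) - expected) ** 2 / expected
                     for d in range(categories))

      # Simulated "honest" measurements: trailing digits should be ~uniform.
      random.seed(1)
      honest = [random.uniform(30, 50) for _ in range(1000)]
      stat = chi_square_uniform(trailing_digit_counts(honest))
      # With 9 degrees of freedom, the 5% critical value is about 16.92;
      # a statistic far above that suggests the digits are not random.
      print(f"chi-square = {stat:.2f}")
      ```

      Fabricated numbers, with their surplus of 5s and 6s and shortage of 0s, would inflate the statistic far beyond what chance allows, which is how a trailing-digit p value as small as .00002 can arise.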
