Below I am providing the first assignment in a new Metascience course I am teaching at Eindhoven University of Technology. The goal of the assignment is to teach students to critically evaluate claims scientists make.
Assignment 1: Andrew Huberman vs. Decoding the Gurus
Andrew Huberman is an associate professor at Stanford University who hosts a popular podcast called ‘Huberman Lab’. His podcast is one of the most listened-to podcasts in the world, and he has more than 5 million subscribers on YouTube and more than 6 million followers on Instagram. He discusses science and science-based tools for everyday life, focusing on physical and mental health. Before starting the main part of this assignment, answer the following two questions.
Question 1:
a) Which factors increase your trust in Andrew Huberman as a reliable source of information on topics surrounding physical and mental health?
b) Which factors decrease your trust?
Feel free to use the internet to form an opinion.
Question 2:
On a scale from 1 (not at all reliable) to 10 (extremely reliable), how reliable do you consider Andrew Huberman to be as a source of information on topics surrounding physical and mental health?
As indicated on Wikipedia, Andrew Huberman’s podcast “has attracted criticism for promoting poorly supported health claims”. In this assignment, you will reflect on whether and why Andrew Huberman promotes poorly supported health claims. More generally, you will reflect on a number of factors that can help you evaluate whether information people provide about scientific findings is reliable.
The study material for this assignment is episode 85 of the podcast “Decoding the Gurus” by Christopher Kavanagh and Matthew Browne, called “Andrew Huberman and Peter Attia: Self-enhancement, supplements & doughnuts?”, released on the 9th of November 2023.
You can listen to the episode here: https://decoding-the-gurus.captivate.fm/episode/andrew-huberman-and-peter-attia-optimising-your-pizza-binges. Note that most Decoding the Gurus episodes are very long. The section you need to listen to for this episode starts at 1 hour, 46 minutes, and 50 seconds. Listening from there to the end takes 1 hour and 26 minutes. Before you listen, read through the questions you will have to answer about the podcast below (especially question 5).
Although it is not necessary to read this information, the paper Huberman discusses is: https://www.biorxiv.org/content/10.1101/2022.07.15.500226v2. The paper was published 2 years after this preprint, but it is in a journal we do not have access to because the subscription fees are too high, so we cannot read the final version of the scientific research these authors did (a good reminder of why open access publication is important).
Question 3:
Which criteria for the quality of scientific research does Andrew Huberman rely on? In the episode he remarks that the study is not peer reviewed, and in other episodes he often discusses whether a study appeared in a peer-reviewed journal (and sometimes whether the journal is considered prestigious). Do you think this is a good criterion of scientific quality? Which aspects make this a good criterion? Which aspects do not make this a good criterion?
a) I believe the following aspects make this a good criterion:
b) I believe the following aspects do not make this a good criterion:
c) My overall evaluation of whether a study being peer reviewed is a good criterion for scientific quality is:
Question 4:
Another criterion Andrew Huberman uses to evaluate whether a finding can be trusted is whether there are multiple published articles that show a similar effect. Which aspects make this a good criterion? Which aspects do not make this a good criterion? The section in the textbook on publication bias might help you reflect on this question: https://lakens.github.io/statistical_inferences/12-bias.html#sec-publicationbias
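To make the problem discussed in that textbook section concrete, here is a minimal simulation sketch (in Python, assuming NumPy and SciPy are available; all numbers are purely illustrative, not taken from any real literature) of what publication bias can do to a research field:

```python
import numpy as np
from scipy import stats

# Simulate a field studying an effect that does not exist (true d = 0).
# Many labs run small two-group experiments, but only statistically
# significant results in the 'right' direction end up being published.
rng = np.random.default_rng(42)
n_studies = 1000   # experiments that are actually run
n_per_group = 20   # participants per group in each experiment

published_d = []
for _ in range(n_studies):
    treatment = rng.normal(loc=0, scale=1, size=n_per_group)  # true effect: zero
    control = rng.normal(loc=0, scale=1, size=n_per_group)
    t, p = stats.ttest_ind(treatment, control)
    if p < 0.05 and t > 0:  # file drawer: only positive significant results appear
        pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
        published_d.append((treatment.mean() - control.mean()) / pooled_sd)

print(f"Studies run: {n_studies}, studies published: {len(published_d)}")
print(f"Mean effect size d in the published literature: {np.mean(published_d):.2f}")
```

Even though the true effect is zero, the ‘published’ literature in this simulation consists of a couple of dozen studies that all report a similar, sizeable effect.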
a) I believe the following aspects make this a good criterion:
b) I believe the following aspects do not make this a good criterion:
c) My overall evaluation of whether the presence of multiple studies in the literature is a good criterion for scientific quality is:
Question 5:
a) Which criticisms do Christopher Kavanagh and Matthew Browne raise about the study Huberman discusses?
b) Which criticisms do the podcast hosts raise about how Huberman presents the study?
c) Which warning signs do the podcast hosts raise about past studies by the same lab?
Question 6:
The podcast hosts discuss the ‘dead salmon’ study. I agree with podcast host Christopher Kavanagh that people interested in metascience should know about this study. It led to lasting changes in the data analysis of fMRI studies. A similar point was made in a full paper, which you can read here. The title of the paper is “Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition”. The original title of this paper when submitted to the journal was “Voodoo Correlations in Social Neuroscience”. The peer reviewers did not like this title, and the authors had to change it before publication, but it is still often referred to as the ‘voodoo correlations’ paper, together with the ‘dead salmon’ poster. Read through the dead salmon study (which was presented as a poster at a conference, not as a full paper). It is not intended as a serious paper. What is the main point of the poster? A high-resolution version is available here.
Question 7:
Huberman discusses the power analysis of the study, but does not criticize it. Below, you can find the power analysis in the original study. The authors planned to detect an effect of d = 0.69, which is as large as the effect of reward learning observed in an earlier study. The following two questions are difficult, and there is not a lot of accessible reading material in the literature yet to help you. Some information to help you can be found in https://lakens.github.io/statistical_inferences/06-effectsize.html#interpreting-effect-sizes and the references in this section.
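To get some intuition for how large an effect of d = 0.69 is, the following sketch (in Python, using SciPy; the conversion formulas assume two normal distributions with equal variances) translates the standardized effect size into more interpretable numbers:

```python
from scipy.stats import norm

d = 0.69  # the effect size the authors powered the study to detect

# Probability of superiority (common language effect size): the chance that
# a randomly chosen person from the treatment group scores higher than a
# randomly chosen person from the control group.
prob_superiority = norm.cdf(d / 2**0.5)

# Cohen's U3: proportion of the treatment group scoring above the control mean.
u3 = norm.cdf(d)

# Overlap of the two distributions (equal-variance normal assumption).
overlap = 2 * norm.cdf(-abs(d) / 2)

print(f"Probability of superiority: {prob_superiority:.2f}")  # ~0.69
print(f"Cohen's U3:                 {u3:.2f}")                # ~0.75
print(f"Distribution overlap:       {overlap:.2f}")           # ~0.73
```

These conversions are discussed in the textbook section linked above, and may help you think about question b) below in particular.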
a) How plausible do you think it is that the placebo effect would have an effect size as large as the effect for reward learning?
b) How large should an effect be for an individual to be aware of it?
Question 8:
a) Do you think Andrew Huberman is overclaiming at the end of the podcast about possible applications of this effect? Is he overhyping?
b) How do you think the studies should have been communicated to a general audience?
Question 9:
It is not possible to ask the following question in any way other than as a loaded question. It is clear what I think about this topic, as I chose to make this assignment. Nevertheless, feel free to disagree with my beliefs.
a) Is Andrew Huberman’s understanding of statistics (and his ability to spot red flags when reading the results of a study) strong enough to adequately weigh the evidence in studies?
b) How well should science communicators be able to interpret the evidence underlying scientific claims in the literature, for example through adequate training in research methods and statistics?
c) How well trained should you be in research methods and statistics to be able to weigh the evidence in research yourself?
Question 10:
After completing the assignment, we will revisit question 2 by asking you once more: On a scale from 1 (not at all reliable) to 10 (extremely reliable), how reliable do you consider Andrew Huberman to be as a source of information on topics surrounding physical and mental health?
Further reading and listening
Additional episodes by Decoding the Gurus on Andrew Huberman:
Episode 81: Andrew Huberman: Forest Bathing in Negative Ions https://decoding-the-gurus.captivate.fm/episode/andrew-huberman-forest-bathing-in-negative-ions (This episode starts with some kind words about our own podcast, Nullius in Verba.)
Episode 90: Mini-Decoding: Huberman on the Vaccine-Autism Controversy https://decoding-the-gurus.captivate.fm/episode/mini-decoding-huberman-on-vaccine-autism-controversy
In Dutch, see https://www.youtube.com/watch?v=KHnQK6wliJU for extra information.