Scarlet & Grey
Ohio State University
School of Music

Sixty Methodological Potholes

by David Huron

The following table identifies a number of fallacies, problems, biases, and effects that scholars have, over the centuries, recognized as confounding the conduct of good research. Note that some of these "methodological potholes" remain contentious among some scholars.

Problem Remedy/Advice
ad hominem argument Criticizing the person rather than criticizing the argument. Focus on the quality of the argument. Be prepared to learn from people you don't like. Avoid construing a debate as a personal or social fight.
discovery fallacy Criticizing an idea because of its origin (for example, an idea given in a religious text). Criticize the justifications offered in support of an idea rather than how the idea originated.
ipse dixit Appealing to an authority figure in support of an argument. Cite published research rather than identifying authority figures. Provide references so others can judge the quality of the supporting research for themselves. Recognize that experts are fallible.
ad baculum argument Appealing to physical or psychological threat. Do not threaten.
egocentric bias The tendency to assume that other people experience things the same way we do. Don't rely exclusively on introspection. Listen carefully to what others report. Carry out a survey or run an experiment in order to observe the behaviors of others. Be wary when generalizing from your own experiences.
cultural bias The inappropriate application of a concept to people from another culture. Talk with culturally knowledgeable people. Carry out cross-cultural experiments. Listen carefully in post-experiment debriefings.
cultural ignorance The failure to make a distinction that people in another culture readily make. Talk with culturally knowledgeable people. Listen carefully in post-experiment debriefings.
over-generalization The tendency to assume that a research result generalizes to a wide variety of real-world situations. Be circumspect when describing results. Look for converging evidence. Ask other people's opinions. Analyze additional works. Run further experiments.
inertia fallacy The idea that research consistent with a particular conclusion will "grow" in the future. A subtle fallacy that is evident in such statements as "Research is increasingly showing that ...". Future research is just as likely to overturn a current theory as to confirm it. Research results do not have inertia. Talk about research results in the past tense ("Research has shown ..." rather than "Research is showing ..."). Avoid "growth" or "band-wagon" metaphors when describing the evidence pertaining to some theory.
relativist fallacy The belief that no idea, hypothesis, theory or belief is better than another. Avoid "absolute" relativism; the world appears to be "relatively relative." Don't mistake relativism for pluralism.
universalist phobia A prejudice against the possibility of cross-cultural universals. Familiarize yourself with the music from a variety of cultures. Investigate notions of similarity and difference. Use cross-cultural surveys or experiments where appropriate.
problem of induction The problem (identified by Hume) that no number of particular observations can establish the truth of some general conclusion. Avoid claiming you know the truth. Present your research results as "consistent" or "inconsistent" with a particular theory, hypothesis or interpretation.
positivist fallacy The problem arising when a phenomenon is deemed not to exist because no evidence is available: "Absence of evidence is interpreted as evidence of absence." Recognize that not all phenomena leave obvious evidence of their existence.
confirmation bias The tendency to see events as conforming to some interpretation, hypothesis, or theory while viewing falsifying events as "exceptions". Be systematic in your observations. When counting examples that either confirm or contradict your theory, do not change the counting criteria as you go in order to exclude some contradicting instances. Establish the qualifying criteria before you start counting.
hindsight bias The ease with which people confidently interpret or explain any set of existing data. Whenever possible, attempt to predict observations in advance. Aim to test ideas rather than to look for confirmation.
unfalsifiable hypothesis The formulation of a theory, hypothesis or interpretation which cannot, in principle, be falsified. Whenever possible, formulate theories, hypotheses or interpretations so they are, in principle, falsifiable. Identify the sorts of observations that would be inconsistent with your views.
post-hoc hypothesis Following data collection, the formulation and testing of additional hypotheses not envisaged before the data were collected. Limit the number of such hypotheses. Beware of hindsight bias and multiple tests. Collect new data; analyze additional works.
smorgasbord thinking Having enough hypotheses to explain all possibilities. Don't deceive yourself that you have only one prediction. Write your prediction down before you analyze any data. Ask yourself whether you have an explanation should the data show a reverse trend; if so, recognize that a view able to explain any outcome tests nothing.
ad-hoc hypothesis The proposing of a supplementary hypothesis that is intended to explain why a favorite theory or interpretation failed an experimental test. Open to grave abuse; try to avoid. Test the ad hoc hypothesis in a separate follow-up study.
sensitivity syndrome The tendency to try to interpret every perturbation in a data set; a failure to recognize that data always contains some "noise". Use test-retest and other techniques to estimate the margin of error for any collected data. Report chance levels, p values, effect sizes. Beware of hindsight bias.
positive results bias A bias commonly shown by scholarly journals to publish only studies that demonstrate positive results (i.e., where data and theory agree). Seek replications for suspect phenomena. Be aware of the possible "bottom-drawer effect".
bottom-drawer effect Unawareness of unpublished negative results of earlier studies. A consequence of positive results bias. Maintain contact and communicate within a scholarly community. Ask other scholars whether they have carried out a given analysis, survey or experiment. Widely report negative results through informal channels.
head-in-the-sand syndrome The failure to test important theories, assumptions, or hypotheses that are readily testable. Be willing to test ideas that everyone presumes are true. Ignore criticisms that you are merely confirming the obvious. Collect pertinent data. Carry out analyses. Do a survey. Run an experiment.
data neglect The tendency to ignore readily available information when assessing theories, assumptions or hypotheses. Don't ignore existing resources. Test your hypotheses using other available data sets.
research hoarding The failure to make the fruits of your scholarship available for the benefit of others. Publish often. Prefer to write short research articles rather than books. Make your data available to others.
double-use data The use of a single data set both to formulate a theory and to "independently" test the theory. Avoid this practice; collect new data instead.
skills neglect The human disposition to resist learning new scholarly methods that may be pertinent to a research problem. Resist scholarly laziness. Engage in continuing education. Learn things your peers and teachers don't know. Don't assume you received a thorough education.
control failure The failure to contrast an experimental group with a control group. Example. Add a control group.
third variable problem The presumption that two correlated variables are causally linked; such links may arise through an unknown third variable. Avoid interpreting correlation as causality. Carry out an experiment where manipulating variables can test notions of probable causality.
reification Falsely concretizing an abstract concept (e.g. regarding spatial representations of pitch structure as mental representations). Take care with terminology.
validity problem When an operational definition of a variable fails to accurately reflect the true theoretical meaning of the variable (See Cozby, p.31). Think carefully when forming operational definitions. Use more than one operational definition. Seek converging evidence.
anti-operationalizing problem The tendency to raise perpetual objections to all operational definitions. Propose better operational definitions. Seek converging evidence using several alternative operational definitions.
problem of ecological validity The problem of generalizing results from controlled experiments to real-world contexts. Seek converging evidence between controlled experiments and experiments in real-world settings.
naturalist fallacy The belief that what IS is what OUGHT to be. Imagine desirable alternatives.
presumptive representation The practice of representing others to themselves. (Natoli, 1997; p.151). Exercise care when portraying or summarizing the views of others -- especially when your portrayal causes a disadvantaged group to lose power.
exclusion problem The tendency to prematurely exclude competing views. (Natoli, 1997; p.151). Remember that "no theory is ever truly dead." (Popper)
contradiction blindness The failure to take contradictions seriously. Attend to possible contradictions.
multiple tests If a statistical test relies on a 0.05 significance level then, on average, one spuriously significant result will occur for every 20 tests performed. Avoid excessive numbers of tests for a given data set. Use statistical techniques to compensate for multiple tests. Split large data sets into one or more "reserved sets" or "training sets". Prefer hypothesis testing over open-ended chasing after significance. Report the actual number of tests performed in a study.
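The arithmetic behind this pothole can be made concrete with a short sketch (the 0.05 level and the figure of 20 tests follow the entry above; the Bonferroni correction is one standard compensating technique):

```python
# Familywise error: the probability that at least one of k independent
# tests at significance level alpha comes out spuriously "significant"
# when every null hypothesis is in fact true.
def familywise_error(alpha, k):
    return 1 - (1 - alpha) ** k

# With alpha = 0.05 and 20 tests, the chance of at least one false
# positive is roughly 64%, not 5%.
print(round(familywise_error(0.05, 20), 2))   # -> 0.64

# The Bonferroni correction tests each hypothesis at alpha / k,
# holding the familywise error rate near the intended alpha.
print(round(familywise_error(0.05 / 20, 20), 3))   # -> 0.049
```

This is why reporting the actual number of tests matters: a single "significant" result among dozens of tests carries little evidential weight.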
overfitting Excessive fine-tuning of one's hypotheses or theories to one particular data set or group of observations. The theories that arise from overfitting describe noise rather than real phenomena. See also sensitivity syndrome. Recognize that samples or observations typically differ in detail. In forming theories, continue to collect new data sets or observations to avoid overfitting.
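Overfitting can be demonstrated in a few lines (a hypothetical example with invented numbers: a linear trend plus noise, fitted by both a simple and an absurdly flexible model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a simple linear trend plus measurement noise.
x = np.linspace(0.0, 1.0, 12)
y = 2.0 * x + rng.normal(0.0, 0.3, size=x.size)

# A fresh sample of the same phenomenon (new observations, new noise).
y_new = 2.0 * x + rng.normal(0.0, 0.3, size=x.size)

def mean_squared_error(degree, y_fit, y_eval):
    """Fit a degree-n polynomial to y_fit, then score it against y_eval."""
    coeffs = np.polyfit(x, y_fit, degree)
    pred = np.polyval(coeffs, x)
    return float(np.mean((pred - y_eval) ** 2))

# The high-degree polynomial hugs every point of the original sample,
# so its error there is far smaller than the linear model's -- it has
# "explained" the noise, not the phenomenon.
print(mean_squared_error(1, y, y), mean_squared_error(11, y, y))

# Scored against the fresh sample, the overfitted model's apparent
# advantage evaporates; its wiggles track noise the new data lack.
print(mean_squared_error(1, y, y_new), mean_squared_error(11, y, y_new))
```

The near-perfect fit on the original sample is exactly the seduction the entry warns about; only new data sets expose it.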
magnitude blindness The tendency to become preoccupied with statistically significant results that nevertheless have a small magnitude of effect. Aim to uncover the most important factors influencing a phenomenon first.
regression artifacts The tendency to interpret regression toward the mean as an experimental phenomenon. Don't use extreme values as a sampling criterion. Use a control group (such as scrambling orders) to compare with the experimental group.
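A small simulation shows how selecting on extreme values manufactures an apparent "improvement" (all numbers here are invented for illustration):

```python
import random

random.seed(1)

# Hypothetical subjects: a stable true ability plus independent
# measurement noise on every testing occasion.
def measure(true_score):
    return true_score + random.gauss(0, 10)

true_scores = [random.gauss(100, 10) for _ in range(5000)]
pretest = [measure(t) for t in true_scores]

# Sampling criterion based on extreme values: keep only the
# lowest-scoring tenth of subjects on the pretest.
cutoff = sorted(pretest)[len(pretest) // 10]
selected = [i for i, score in enumerate(pretest) if score <= cutoff]

# Retest the selected group with no intervention at all.
posttest = [measure(true_scores[i]) for i in selected]

pre_mean = sum(pretest[i] for i in selected) / len(selected)
post_mean = sum(posttest) / len(selected)

# The group mean moves back toward the population mean (100) purely
# through noise -- easily mistaken for a treatment effect.
print(post_mean > pre_mean)  # -> True
```

An untreated control group selected by the same extreme-value criterion would show the same "improvement", which is why the control comparison is the remedy.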
range restriction effect Failure to vary an independent variable over a sufficient range of values -- with the consequence that the effect size looks small. Decide what range of a variable or what effect size is of interest. Run a pilot study.
ceiling effect When a task is so easy that the experimental manipulation shows little/no effect. Make the task more difficult. Run a pilot study.
floor effect When a task is so difficult that the experimental manipulation shows little/no effect. Make the task easier. Run a pilot study.
sampling bias Any confound that causes the sample to not be representative of the pertinent population. Use random sampling. If there are identifiable sub-groups use a stratified random sample. Where possible, avoid "convenience" or haphazard sampling.
homogeneity bias Failure to recognize that sub-groups within a sample respond differently. For example, where responses diverge between males and females, or between Germans and Dutch. Use descriptive methods and data exploration methods to examine the experimental results. Use cluster analysis methods where appropriate.
cohort bias or cohort effect Differences between age groups in a cross-sectional study that are due to generational differences rather than due to the experimental manipulation. Use a narrower range of ages. Use a longitudinal design instead of a cross-sectional design.
expectancy effect Any unconscious or conscious cues that convey to the subject how the experimenter wants them to respond. Expecting someone to behave in a particular way has been shown to promote the expected behavior. Use standardized interactions with subjects. Use automated data-gathering methods. Use a double-blind protocol.
placebo effect The positive or negative response arising from the subject's belief about the efficacy of some manipulation. Use a placebo control group.
demand characteristics Any aspect of an experiment that might inform subjects of the purpose of the study. Control demand characteristics by: (1) using deception (for example, by adding "filler" questions that make it more difficult for subjects to infer the experimental question), (2) debriefing subjects at the end of the experiment, (3) using field observation, (4) avoiding within-subjects designs where all subjects are aware of all the experimental conditions, and (5) asking subjects not to discuss the experiment with future participants.
reactivity problem When the act of measuring something changes the measurement itself. (See Cozby, p.33) Use clandestine measurement methods.
history effect Any change between a pretest measure and posttest measure that is not attributable to the experimental manipulation. Isolate subjects from external information. Use post-experiment debriefing to identify possible confounds.
maturation confounds Any changes in responses due to changes in the subject not related to the experimental manipulation. Examples of maturation changes include increasing boredom, becoming hungry, and (for longer experiments) reduced reaction times, fading beauty, becoming wiser, etc. (See Cozby, p.68) Prefer short experiments. Provide breaks. Run a pilot study.
testing effect In a pretest-posttest design, where a pre-test causes subjects to behave differently. (See Cozby, p.69). Use clandestine measurement methods. Use a control group with no manipulation between pre- and post-test.
carry-over effect When the effects of one treatment are still present when the next treatment is given. (See Cozby, p.281) Leave lots of time between treatments. Use between-subjects design.
order effect In a repeated measures design, the effect that the order of introducing treatment has on the dependent variable. More. Randomize or counter-balance treatment order. Use between-subjects design.
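Counterbalancing treatment order can be sketched as follows (a hypothetical three-treatment example; the treatment names are invented):

```python
from itertools import permutations

# Hypothetical treatments.
treatments = ["A", "B", "C"]

# Full counterbalancing: run equal numbers of subjects through every
# possible order, so order effects cancel across the group.
all_orders = list(permutations(treatments))
print(len(all_orders))  # -> 6

# With many treatments, n! orders become impractical. A cyclic Latin
# square is a cheaper scheme: each treatment occupies each serial
# position exactly once across the set of orders. (It balances
# position only; a Williams design also balances immediate carry-over.)
def latin_square(items):
    n = len(items)
    return [[items[(row + col) % n] for col in range(n)] for row in range(n)]

for order in latin_square(treatments):
    print(order)
```

Subjects are then assigned to the rows in rotation, so no single treatment order dominates the data.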
mortality problem In a longitudinal study, the bias introduced by some subjects disappearing from the sample. Convince subjects to continue; investigate possible differences between continuing and non-continuing subjects.
premature reduction The tendency to rush into an experiment without first familiarizing yourself with a complex phenomenon. Use descriptive and qualitative methods to explore a complex phenomenon. Use explorative information to help form testable hypotheses and to identify plausible confounds that need to be controlled.
spelunking "Exploring" a phenomenon without ever testing a proper hypothesis or theory. Don't just describe. Look for underlying patterns that might lead to "generalized" knowledge. Formulate and test hypotheses.
shifting population problem The tendency to reconceive of a sample as representing a different population than originally thought. Write down in advance what you think is the population.
instrument decay Changes of measurement over time due to fatigue, increased observational skill, or changes of observational standards. Use a pilot study to establish observational standards and develop skill.
reliability problem When various measures or judgments are inconsistent. Solutions: (1) careful training of the experimenter, (2) careful attention to instrumentation, (3) measure reliability, and avoid interpreting effects smaller than the error bars.
hypocrisy Holding others to a higher methodological standard than oneself. Employ higher standards than others. Be as thorough as possible in your self-criticism. Follow your own advice.

© Copyright David Huron, 2000, 2001.