Repeatedly, Brooks won’t tell us how much!


Always look for this problem: In this morning’s New York Times, David Brooks does his annual column about social science research.

This column showcases a problem which often has us tearing our hair. The problem appears in Brooks’ first example of top social science research (italics by Brooks):
BROOKS (12/11/12): Organic foods may make you less generous. In a study published in Social Psychology and Personality Science, Kendall J. Eskine had people look at organic foods, comfort foods or a group of control foods. Those who viewed organic foods subsequently volunteered less time to help a needy stranger and they judged moral transgressions more harshly.
Interesting! Those who viewed organic foods subsequently volunteered less time to help a needy stranger! That said, how much less time did they volunteer?

Uh-oh! Brooks doesn’t say!

We don’t mean to single out Brooks. Columnists routinely engage in this practice; routinely, columnists will cite an allegedly fascinating study without telling us how large the observed distinction was. This may let them pimp a favorite theme without telling readers that the observed distinction was in fact quite small.

This morning, Brooks isn’t pushing a policy idea; he is just having some annual fun. But again and again, he fails to tell us how large the measured effect was.

This is his second example:
BROOKS: Men are dumber around women. Thijs Verwijmeren, Vera Rommeswinkel and Johan C. Karremans gave men cognitive tests after they had interacted with a woman via computer. In the study, published in the Journal of Experimental Social Psychology, the male cognitive performance declined after the interaction, or even after the men merely anticipated an interaction with a woman.
We’re told that “the male cognitive performance declined”—but we aren’t told how much it declined! (In our view, if performance declines for more than four hours, men should seek medical attention.)

In his third example, Brooks presents the kind of finding many columnists will present to drive a point about gender equity issues. By now, you can incomparably spot the problem:
BROOKS: Women inhibit their own performance. In a study published in Self and Identity, Shen Zhang, Toni Schmader and William M. Hall gave women a series of math tests. On some tests they signed their real name, on others they signed a fictitious name. The women scored better on the fictitious name tests, when their own reputation was not at risk.
Women scored better on the fictitious tests! But did they score better by much?

Brooks presents a dozen studies in which some distinction was observed. In none of these cases are we told how big the distinction was. We often tear our hair about this familiar practice, especially when columnists try to advance a policy preference by citing a study this way.

Brooks closes his column as follows. Slyly, the analysts smiled about the highlighted points:
BROOKS: It’s always worth emphasizing that no one study is dispositive. Many, many studies do not replicate. Still, these sorts of studies do remind us that we are influenced by a thousand breezes permeating the unconscious layers of our minds. They remind us of the power of social context. They’re also nice conversation starters. If you find this sort of thing interesting, you really should check out Kevin Lewis’s blog at National Affairs. He provides links to hundreds of academic studies a year, from which these selections have been drawn.
No one study is dispositive? In fact, few studies are useful at all unless we’re given a rough idea how big their measured effect was. In studies like these, size matters.
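What would the missing number look like? One common way to report "how big" is a standardized effect size such as Cohen's d, the gap between two group averages measured in standard deviations. Here is a minimal sketch; the volunteering minutes below are invented for illustration, not taken from Eskine's study:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardized mean difference (Cohen's d) between two samples."""
    n_a, n_b = len(group_a), len(group_b)
    # Pooled variance weights each group's sample variance by its
    # degrees of freedom (n - 1).
    pooled_var = ((n_a - 1) * stdev(group_a) ** 2 +
                  (n_b - 1) * stdev(group_b) ** 2) / (n_a + n_b - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical minutes volunteered after viewing each food type.
organic = [10, 12, 11, 9, 13, 10]
control = [12, 14, 13, 11, 15, 12]
print(round(cohens_d(organic, control), 2))  # negative: organic group volunteered less
```

A d near 0.2 is conventionally called small and one near 0.8 large; a single number like this is exactly what the column leaves out.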

Then too, as Brooks notes, these studies can be good conversation starters! Just a guess:

A woman is more likely to warm to a man in a bar. If he has a good study to show her.

This correction is no damn good: Uh-oh! Just like that, Brooks has a correction to make:
BROOKS: An earlier version of this column misstated the findings of a study in the journal Economics Letters about corporate success. The authors found that C.E.O.’s were disproportionately less likely—not disproportionately likely—to have been born in June and July.
CEOs are less likely to have been born in June and July? Either way, it didn't much matter. You see, Brooks didn't tell us how much more or less likely these picked-apart CEOs were.


  1. Not to mention the logical fallacy of post hoc ergo propter hoc (whether or not it is committed by the studies themselves, it is certainly being committed by Brooks).

    Aside from the problem of idiotic sociological and psychological research (to be distinguished from the serious stuff) that the press loves to make much of, why is David Brooks the NYT's designated sociology/psychology pundit? He has no formal credentials in these areas and no demonstrated expertise or insight on these subjects.

  2. A great example of a study that was much smaller and whose conclusions were less determinative than many imagine is described here.

    1. I swore I wouldn't do this but here goes.

      July 2012 Popular Science had an article about climate scientists that dared to say climate change (global warming) was accelerated by human activity.

      They received hate mail, death threats, white powder in their mail, lawsuits, attacks by politicians, cancellation of grants, etc.

      Some quit research science altogether out of fear of being unable to make a living in their chosen field.

      Climatologists all over the world have learned to keep a low profile out of fear.

      Might that produce a "statistically significant" skewing of the polls?

  3. Quaker in a Basement, December 11, 2012 at 8:05 PM

    Aw, give Brooks a break. At least this one time every year he cites social trends that have evidence backing them up. The rest of the year, he just draws sweeping conclusions from his own limited observations.

  4. We’re told that “the male cognitive performance declined”—but we aren’t told how much it declined! (In our view, if performance declines for more than four hours, men should seek medical attention.)
    At last, an actual regular old-fashioned joke from Bob! Not sarcasm, just a good old joke. I understand he does stand-up comedy, but I have wondered where he hides his sense of humor in his blog, other than in sarcastic and sometimes witty asides. A good joke or two sprinkled here and there in your comments only adds to the enjoyment and doesn't distract from whatever point you are making. Plus I like not having to pay a cover charge. More jokes, please.

  5. At the very least Brooks should say something along the lines of "and the study's conclusion was that the difference observed was within the margin of error and statistically significant." Unless, of course, it wasn't. ;)

    And yes, size does matter. Sample size, that is.

  6. Small differences can have cumulative effects. You cannot just look at the size of the difference involved but also have to think about what it might mean over time and in the contexts where an effect occurs.

    Further, influences on behavior work in conjunction with each other in life, even if they are studied independently. It would be silly to scoff at a small effect without recognizing that it is one of many similar factors, each influencing behavior in a particular direction, for example.

    David, if something is "within the margin of error," it will not be statistically significant. The term "statistically significant" implies that a finding is greater than what might occur by error (by chance variation).

    In these studies you should be checking for control groups. When a result occurs in one group but not in the control group, you can assume more about causality. The change in men's cognition might be regression to the mean unless you include a control group to compare against. Bob seems to think that you cannot demonstrate causality in these studies, but the whole point of doing a manipulation with a control group is to demonstrate causality. So, look for whether something was manipulated or whether the comparison involves just correlations (which show relationship but not causation).

    Most phenomena in psychology show very small effect sizes and account for tiny proportions of the variability in behavior, not because the phenomenon is unimportant but because behavior is very complex and has lots of different influences on it -- making it hard for any one phenomenon studied by researchers to account for all of the differences in behavior observed. So being a critical thinker about psychology can result in throwing out the baby with the bathwater.

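One commenter above notes that a result "within the margin of error" cannot be statistically significant. To make that distinction concrete: a permutation test asks how often randomly relabeling the two groups would produce a difference at least as large as the one actually observed. A minimal sketch, with all scores invented for illustration:

```python
import random
from statistics import mean

def permutation_p_value(group_a, group_b, n_perms=10000, seed=0):
    """Two-sided permutation test: estimate how often shuffling the
    group labels yields a mean difference at least as large as the
    observed one. A small p-value means 'unlikely to be chance.'"""
    rng = random.Random(seed)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = group_a + group_b  # copy; shuffling won't touch the inputs
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perms):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:
            hits += 1
    return hits / n_perms

# Hypothetical test scores for a treated group and a control group.
treated = [51, 48, 55, 50, 47, 53]
control = [60, 58, 62, 59, 61, 57]
print(permutation_p_value(treated, control))  # well below 0.05 here
```

A difference this clean almost never arises from shuffled labels, so it would be called statistically significant; note that significance still says nothing about how large or practically important the effect is, which is the column's missing number.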