Ripples: Concerning that very rough rule of thumb!


Where that rough rule of thumb comes from: Yesterday, we received an e-mail concerning the origin of that very rough rule of thumb—the rule which says that ten points on the NAEP scale is roughly equal to one academic year.

At one point, our mailer linked to this academic paper. On page 5, the authors describe a 10-11 point rule of thumb, and give a sense of its origin:
LUBIENSKI AND LUBIENSKI (2006): NAEP mathematics results are reported on a 0-500 scale, with the 2003 mean being 235 at grade 4 and 278 at grade 8. The NAEP mathematics scale was originally designed to allow cross-grade comparisons, indicating that the 43-point difference between grade 4 and grade 8 would mean that a gap of 10 or 11 points represents roughly one grade level. NAEP no longer maintains that the scale is consistent across grades. However the scales have not changed markedly, and therefore the idea that “10-11 points is approximately 1 grade level” is still often used as a helpful, albeit rough, guide for interpreting score differences.
That has always been our general understanding of where this rough rule comes from. It seems to us that there was once a pristine 40-point difference between NAEP cut-off scores for “proficiency” at the Grade 4 and Grade 8 levels.
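The arithmetic behind the rule is simple enough to sketch. Using the 2003 means quoted by Lubienski and Lubienski (235 at grade 4, 278 at grade 8), the per-grade figure is just the spread divided by the four intervening grade levels:

```python
# Rough derivation of the "10-11 points per grade level" rule,
# using the 2003 NAEP math means quoted above.
grade4_mean = 235
grade8_mean = 278

spread = grade8_mean - grade4_mean   # 43 points
grades_between = 8 - 4               # four grade levels

points_per_grade = spread / grades_between
print(points_per_grade)              # 10.75, i.e. roughly 10-11 points
```

That is where the "10 or 11 points" figure comes from; anything finer-grained than that is false precision.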

Still, our mailer wanted more. He started like this:
EMAILER: This email in-part pertains to your often quoted rule of thumb regarding interpreting score gains on the NAEP: that 10 points gives a very rough approximation of one academic year. I've been looking around for other instances of this rule of thumb being used and have not been able to find very many. I'm not presenting this as a criticism, rather I'm looking for some means of defending the use of this rule of thumb should I ever be asked about it while discussing student score gains on the NAEP...
Where can he find this rule of thumb being used? Offhand, we can’t give specific citations, and it isn’t easy to search for. But education reporters and education “experts” will often apply this ten-point rule when they discuss achievement gaps on the NAEP. They won’t specifically declare that they are applying a ten-point rule. They will simply (for example) report a 20-point achievement gap, then say it corresponds to roughly two years of schooling.

You can see a similar performance in Amanda Ripley’s new book, this time with respect to the PISA. At several points in the book, Ripley converts score differences on the PISA into a rough number of academic years. In this example, she compares an 84-point gap in the PISA scale to two academic years:
RIPLEY (page 158): African-American students did poorly on PISA, heartbreakingly so. On average, they scored eighty-four points below white students in reading in 2009. It was as if the white kids had been going to school two extra years. The gap between white and African-American students showed itself in dozens of other ways too, from graduation rates to SAT scores. Generally speaking, up to half the gap could be explained by economics; black students were more likely to come from lower-income families with less-educated parents.
In her text, Ripley doesn’t state her basis for that conversion, in which 84 points is compared to two school years. In an endnote, she says this: “In general, thirty-nine score points in PISA is considered the equivalent of one year of formal schooling.” She doesn’t state her source for that rule of thumb.
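Taking Ripley's endnote at face value (thirty-nine PISA points per year of formal schooling), her 84-point gap does come out at roughly two years. A quick check of the conversion:

```python
# Checking Ripley's conversion: an 84-point PISA gap
# at 39 points per year of schooling (her endnote's figure).
gap_points = 84
points_per_year = 39

years = gap_points / points_per_year
print(round(years, 2))   # about 2.15 school years
```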

In a similar way, we have often seen the ten-point rule applied in practice—always when discussing the gaps, never when discussing the large score gains which have been achieved on the NAEP.

Alert newspapers would have done the following long ago. First, they would have reported the score gains which have been achieved by different groups on the NAEP. This is virtually never done. Except in aggregate form, the score gains are essentially never reported.

Second, they would have interviewed NAEP officials, looking for ways to interpret the score gains. At the Grade 4 level, black kids gained 37 points in math between 1992 and 2013. How much academic gain might we associate with a score gain of that size?
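Applying the rough ten-point rule mechanically (with all the usual caveats about cross-year comparability), a 37-point gain converts like this:

```python
# Mechanical application of the rough ten-point rule to the
# grade 4 math gain for black students, 1992-2013.
gain_points = 37
points_per_grade = 10   # the very rough rule of thumb

grade_years = gain_points / points_per_grade
print(grade_years)      # 3.7 grade years, by this rough rule
```

By that rough rule, the gain would correspond to well over three academic years—which is exactly the sort of figure reporters ought to be asking NAEP officials to interpret.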

Newspapers should have pursued that question long ago. In truth, there’s no sign that they ever will. Our big newspapers won’t even report these large score gains, let alone try to interpret them.

We live inside a badly broken intellectual culture. Our reporters know how to do one thing. They know how to memorize and repeat the scripts preferred by elites.

How much better are black kids doing? What might explain their apparent progress? The New York Times doesn’t care. Neither do the millionaires who entertain you night after night on The One True Channel.

They enjoy the antics of Rob Ford. Black kids can go straight to Hell—or to Toronto, whichever comes first. These are the actual values of our millionaire pseudo-press.

We know—they seem like lovely people. We’re simply telling you what they do—whose lives they stoop to discuss.


  1. Arne Duncan said, "...previous standards were too low, previous tests were too easy and American students are not prepared to compete in the global economy."

    Why should he know what he's talking about? He's only Secretary of Education.

    Read more:

    1. Dave, he might have the title, but he doesn't have the background.

    2. "Why should he know what he's talking about? He's only the Secretary of Education."

      Michael D. Brown - first Undersecretary of Emergency Preparedness and Response - formerly Judge and Stewards Commissioner for the International Arabian Horse Assn.

      What Diana says.

    3. Right. Duncan's 17 years of experience in school administration, including service as CEO of Chicago Public Schools is exactly as irrelevant to the job of Secretary of Education as Brown's experience with the horse association was to his new job as head of FEMA.

      Good grief, folks. Get a grip.

      Meanwhile, as Somerby continues to worry over the latest book or article written by a young woman posing as an education reporter, the controversy over Common Core rages on. Without his valuable insights to feed his tribe.

    4. From Wikipedia about Duncan's work history, after playing professional basketball until 1991:

      In 1992 childhood friend and investment banker John W. Rogers, Jr., appointed Duncan director of the Ariel Education Initiative, a program mentoring children at one of the city's worst-performing elementary schools and then assisting them as they proceeded further in the education system. After the school closed in 1996, Duncan and Rogers were instrumental in re-opening it as a charter school, Ariel Community Academy. In 1999, Duncan was appointed Deputy Chief of Staff for former Chicago Public Schools CEO Paul Vallas.

      Mayor Richard M. Daley appointed Duncan to serve as Chief Executive Officer of the Chicago Public Schools on June 26, 2001. Opinions vary on Duncan's success as CEO; one prominent publication notes improved test scores and describes Duncan as a consensus builder, while another finds the improvements largely a myth and is troubled by the closing of neighborhood schools and their replacement by charter schools, and what it describes as schools' militarization.

      He has no experience as a teacher. He comes straight out of the charter school movement and his appointments are all cronyism, including his latest appt by Obama. He has no background at all in educational theory and no training as an educator. I was unhappy with his appointment myself, but it was no surprise given Obama's campaign statements on education.

    5. @ David in Cal:

      Arne Duncan has said a lot of stupid things, so just because he says something now (about standards) does not make it true. Far from it.

      Duncan is an embarrassment. Which is why many educators are not enthralled in the least with Obama's education initiatives (Race to the Top? Please.).

    6. On what does he base this statement? Let's see the data.

  2. Comparing grade levels within a particular testing year yields an average difference quite close to the 10 points per year "rough rule." Indeed, it's not very rough. From year to year is a trickier matter, because it relies on the assumption that NAEP test designers are successful in making each year's test equal in difficulty. That's a big assumption no matter how hard they try to accomplish that. Frankly, a gain of 3.7 grade years (give or take a couple of percentage points) for any group in any subject, even over 20 years, is hard to imagine.

    1. That should say tenths of a percentage point.

  3. Urban legend -- I often disagree with you, but I think your comment here is right on!

    1. I'll try to be more civil with you next time -- at least once, anyway.

    2. Not me, urban legend. Does that make me a bad person?

      DAinCA, don't you have some explaining to do back in the commentary on the last TRIBE AND RACE entry? Or have you just given up on that evidence thing?