Part 4—Truly horrific work: Are seniors in American high schools doing less well in math?
In theory, it’s an important question. In practice, it’s clear that nobody cares. For various reasons, it can also be a tricky question—a difficult question to answer.
Are seniors doing less well in math? In theory, the question’s important. For that reason, we’ve spent several days trying to puzzle it out—more days than we had planned.
It’s also important to understand something else. It’s important to understand the way such questions get churned at our biggest establishment news orgs.
Routinely, our journalists turn to a stable of “educational experts” as they produce their scripted reports about the allegedly floundering public schools. All too often, these experts seem to serve as conduits of establishment narrative, sometimes in defiance of basic obvious facts.
Michael Petrilli is an establishment educational expert. For the record, he’s head of the Thomas B. Fordham Institute, “an ideologically conservative American nonprofit education policy think tank with offices in Washington, D.C., Columbus, Ohio, and Dayton, Ohio.”
(Unless you read the Washington Post, in which case the Fordham Institute is simply an “educational think tank.” Whatever!)
Are seniors in American high schools doing less well in math? On September 3, Petrilli explored this general question in a blog post bearing this title:
“Why is high school achievement flat?”
In our view, Petrilli’s blog post is horrific—a moral and/or intellectual disgrace. On the brighter side, it helps us see the way our “educational experts” routinely function inside the hall of mirrors we call our “public discourse.”
What makes that blog post such a mess? Yesterday, we started to answer that question. Today, let’s run through the basics.
In fairness to Petrilli, he seems to understand several things about the interpretation of test scores. In particular, he understands some of the problems people encounter when they try to interpret Grade 12 scores—when they try to evaluate progress, over time, at the Grade 12 level.
What does Petrilli understand? Consider three basic points:
The SATs weren’t designed for that purpose: Petrilli seems to understand a basic fact—the SATs were not designed for that purpose!
More specifically, the SATs were not designed to permit comparisons of America’s high school seniors over time. In fact, the SATs weren't designed to measure populations at all.
As we explained last week, the SATs were designed to measure the achievement of individuals. The program doesn’t make any attempt to test representative samples of the Grade 12 student population—not this year, not last year, not ten years ago.
Petrilli almost seems to understand this fact. As we noted yesterday, his blog post starts like this:
PETRILLI (9/3/15): The latest SAT scores are out today, and as I remarked to Nick Anderson at the Washington Post, education reform appears to be hitting a wall in high school. In truth, we already knew this. The SATs aren’t even the best gauge—not all students take them, and those who do are hardly representative.

Question: What kind of expert reasons that way, in whatever field?
Petrilli understands that the students who take the SATs are “hardly representative” of the Grade 12 student population as a whole. On this basis, he makes a weird statement:
The SATs aren’t the best gauge of that population, our expert weirdly says.
Good God! Because they’re “hardly representative,” the tested students can’t safely be used as a gauge at all! Beyond that, Petrilli surely knows that the demographic blend of the tested students has been changing every year, in ways which tend to lower average scores and doom attempts at comparisons over time.
He knows that, but he doesn’t say so. What kind of “expert” does this?
A statistical complexity involving Grade 12 NAEP: Petrilli is also aware of a statistical complexity involving Grade 12 scores on the NAEP. As we noted yesterday, he explains this statistical problem in the passage shown below.
Never mind what he’s explaining here. His basic point is clear:
PETRILLI: One explanation could be America’s rising graduation rate. Students who would have previously dropped out are now staying in school and remaining in the NAEP sample, thereby dragging down the scores. That sounds plausible to me...

Below, we’ll look at the fuller passage, which we regard as horrific. That said, Petrilli seems to understand a possible statistical complexity which affects the utility of Grade 12 score comparisons over time. To wit:
As our national drop-out rate declines, lower-achieving students who once dropped out are staying in school through Grade 12.
Educationally, that’s a positive trend. But over time, the lower drop-out rate probably tends to “drag down [average NAEP] scores.”
“That sounds plausible to me,” Petrilli says. Below, we’ll marvel at what he says next.
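For readers who like to see the arithmetic, here’s a minimal sketch of that composition effect, using invented numbers in Python rather than actual NAEP data. If lower-scoring students who once dropped out now stay in the tested population, the average can fall even though no individual student is doing any worse:

    # Hypothetical numbers for illustration only; these are not actual NAEP scores.
    stayers = [300] * 80      # students who would have been tested in either era
    returners = [250] * 20    # lower-scoring students who previously dropped out

    average_before = sum(stayers) / len(stayers)
    average_after = sum(stayers + returners) / len(stayers + returners)

    print(average_before)     # 300.0
    print(average_after)      # 290.0: the average falls, though nobody's performance declined

No student got worse in that little example; the tested population simply changed. That is the complication Petrilli is gesturing at.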
Simpson’s Paradox: Petrilli is even aware of the role played by “Simpson’s Paradox.” This affects analysis of test scores at all grade levels, not just in Grade 12:
PETRILLI: Or maybe it’s Simpson’s Paradox at work. That would suggest that all racial groups are doing better, but because lower-scoring Latinos are replacing whites over time, our overall scores are declining.

Simpson’s Paradox refers to a counterintuitive state of affairs. Within a given population, every group can improve its average performance over time—but the overall average performance may remain unchanged, or even go down.
In the realm of test scores, this will happen if lower-scoring groups constitute a larger portion of the overall group over time, as has been the case with American public school testing.
This helps explain why average SAT scores have dropped in recent years. Petrilli understands this obvious fact, but he didn’t mention it in his blog post.
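To see the paradox in miniature, here is a hedged sketch with invented figures, not real SAT or NAEP data. Every group’s average rises from one year to the next, yet the overall average falls, simply because the lower-scoring group makes up a larger share of the test-takers in the later year:

    # Invented figures for illustration only; not real test data.
    # Each entry is (average score, number of test-takers).
    year_one = {"group_a": (320, 900), "group_b": (260, 100)}
    year_two = {"group_a": (325, 600), "group_b": (265, 400)}

    def overall_average(year):
        total_points = sum(avg * n for avg, n in year.values())
        total_students = sum(n for _, n in year.values())
        return total_points / total_students

    print(overall_average(year_one))  # 314.0
    print(overall_average(year_two))  # 301.0: both groups gained 5 points, yet the overall average fell

That, roughly, is the arithmetic behind those declining SAT averages, and it applies at every grade level.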
Petrilli understands these things! At the same time, he’s talking about a nation whose Grade 12 NAEP scores have been on the rise in the most recent period available for review.
According to the NAEP, high school seniors have been doing better in math! As we’ve shown you in the past two days, these are the actual score gains:
Gains in average scores, 2005-2013
Main NAEP, Grade 12 math
National public schools
White students: 4.32 points
Black students: 5.24 points
Hispanic students: 7.67 points
Asian-American students: 11.08 points
American Indian/Native Alaskan students: 9.48 points

Judged by normal measures, those score gains in math are substantial. And uh-oh:
As we’ll show you tomorrow, those score gains are actually larger than the score gains in Grade 4 over the same eight-year period, the most recent period for which we have NAEP data.
(The largest score gains occurred in Grade 8. All data tomorrow.)
The Grade 12 score gains are actually larger than the gains in Grade 4! And yet, Petrilli writes the following, under a headline which declares that high school achievement is flat:
PETRILLI: ...NAEP shows respectable gains for younger students, especially in fourth grade and particularly in math. Yet these early gains seem to evaporate as kids get older.

How did our educational expert come up with that highlighted claim? In his blog post, he made two unfortunate plays, each of which helped him reach the gloomy conclusion which is currently “hot.”
First, Petrilli jumped to a different study! In discussing NAEP scores from the fourth and twelfth grades, he has been talking about the so-called “Main NAEP” study, which tests students in Grades 4, 8 and 12.
For the record, it’s clear that that is the study Petrilli has been discussing. Early on, he offers this gloomy claim:
“Twelfth-grade NAEP: Flat.”
As we noted yesterday, Petrilli doesn’t link to any NAEP scores when he offers this assessment. Instead, he links to a news report by a “staff writer” for the Christian Science Monitor—a largely bungled report about the release of scores from the 2013 “Main NAEP.”
Plainly, Petrilli has been discussing scores from the Main NAEP. But uh-oh! Grade 12 math scores on the Main NAEP actually haven’t been flat.
In math, the Grade 12 score gains seem substantial; they’re larger than the gains at Grade 4! That said, how did Petrilli support the claim we’ve posted above?
By switching to a different study! This is his fuller passage:
PETRILLI: ...NAEP shows respectable gains for younger students, especially in fourth grade and particularly in math. Yet these early gains seem to evaporate as kids get older. Here’s what that looks like using data from the long-term trend NAEP for three recent student cohorts. Progress at ages nine and thirteen hasn’t translated into progress at age seventeen.

In that passage, Petrilli switches—without saying so—from the “Main NAEP” to a different NAEP study, the so-called “Long-Term Trend Assessment.”
The Long-Term Trend Assessment tests 9-year-old students, 13-year-old students and 17-year-old students, without regard to what grade they’re in. (Warning! Some 17-year-old students will be sophomores or juniors.) It uses a different math test than the one employed in the “Main NAEP.”
The Long-Term Trend Assessment is a different, parallel study. That doesn’t mean that it can’t be consulted in a wide array of ways. But it doesn’t specifically test high school seniors, as the Main NAEP specifically does. And for today, we’ll ask you to notice this:
When Petrilli switches to the Long-Term Trend Assessment, he looks at changes in scores over an 18-year period. He looks at score gains from 1994 through 2012, the last year for which data are available.
It’s true! Over that 18-year span, score gains were substantially larger among the two sets of younger students than among the 17-year-old students. Here’s the obvious problem:
Eighteen years is a fairly long time! To what extent might changing drop-out rates have “dragged down” average scores among 17-year-old students during that lengthy period?
For ourselves, we have no idea—and Petrilli doesn’t seem to care! Below, you see the fuller passage in which he describes the possible effect on average scores of changing drop-out rates.
We’ll ask one question at this time. What kind of “expert,” in any field, would ever reason like this:
PETRILLI: One explanation could be America’s rising graduation rate. Students who would have previously dropped out are now staying in school and remaining in the NAEP sample, thereby dragging down the scores. That sounds plausible to me, but to my knowledge, nobody has proved it empirically. Budding education policy scholars out there: Who is game to tackle that methodological challenge?

What kind of “expert,” in any field, would ever reason like that? Here’s what Petrilli does, and fails to do, in that ridiculous passage:
Did our declining drop-out rate “drag down” average scores at the 17-year-old level over that 18-year time span? To Petrilli, that “sounds plausible.”
Having said that, he also says this—to his knowledge, no one has proven that this occurred.
Of course, this seems to mean that no one has proven that it didn’t occur! In other words, Michael Petrilli doesn’t know if the smaller gains among the 17-year-old students, over that lengthy time span, resulted from that “plausible” cause.
Petrilli doesn’t know if that happened; we can’t tell you either. That said, we didn’t run to the Washington Post and use those cherry-picked data to say, in Best Approved Elite Reform Fashion, that progress is stagnant at the Grade 12 level—even as the more recent math scores from the Main NAEP seem to show substantial progress occurring in Grade 12.
Petrilli’s blog post is horrific. How absurd does it get?
At one point, its author—an educational expert—says the SAT is no good for the task at hand. So he tells us to consider the ACT instead!
What kind of “expert,” in any field, would ever produce such ludicrous work? We’ll try to answer that question tomorrow.
We’ll also show you the most recent data from all three grades—Grades 4, 8 and 12—tested in the Main NAEP. The Grade 12 score gains seem substantial—and they’re larger than those at Grade 4!
How would our “expert” explain such a thing? Would he just keep churning script?
Tomorrow: A deeply important disclosure