Part 3—Your important new mantra: Are 17-year-old American students doing less well in math?
Having asked that, let’s also ask this:
Are 17-year-old Americans doing less well in math?
As some readers will already know, those are different questions. That said, let’s make a basic point:
The SAT is not designed to answer either question!
The SATs are not designed for that purpose! We’re going to offer that today as an important new mantra—a mantra you should repeat, in slavish fashion, if you want to understand one of the most ubiquitous cons in which your “press corps” engages.
“The SATs are not designed for that purpose! The SATs are not designed for that purpose!”
We’ll suggest you repeat it again and again, unless you enjoy being misled by the type of elite establishment script the Washington Post published last week.
Nick Anderson is an experienced education reporter. Basically, he was working a very familiar con in his lengthy, front-page news report in last Thursday’s Post.
The new SAT scores had been released—and no one could explain the small drop in average scores! At least, that’s the impression Anderson gave, conning you as he did.
This is the way the music man started. Are you reciting your mantra as you review his misleading work?
ANDERSON (9/3/15): Scores on the SAT have sunk to the lowest level since the college admission test was overhauled in 2005, adding to worries about student performance in the nation’s high schools.
The average score for the Class of 2015 was 1490 out of a maximum 2400, the College Board reported Thursday. That was down 7 points from the previous class’s mark and was the lowest composite score of the past decade. There were declines of at least 2 points on all three sections of the test—critical reading, math and writing.
The steady decline in SAT scores and generally stagnant results from high schools on federal tests and other measures reflect a troubling shortcoming of education-reform efforts. The test results show that gains in reading and math in elementary grades haven’t led to broad improvement in high schools, experts say. That means several hundred thousand teenagers, especially those who grew up poor, are leaving school every year unready for college.
“Why is education reform hitting a wall in high school?” asked Michael J. Petrilli, president of the Thomas B. Fordham Institute, a think tank. “You see this in all kinds of evidence. Kids don’t make a whole lot of gains once they’re in high school. It certainly should raise an alarm.”
It is difficult to pinpoint a reason for the decline in SAT scores, but educators cite a host of enduring challenges in the quest to lift high school achievement. Among them are poverty, language barriers, low levels of parental education and social ills that plague many urban neighborhoods.
Before our series is finished, we’ll review some horrible recent work by the awful Petrilli. For the record, he’s president of the Fordham Institute, a conservative think tank—but then, who’s keeping score?
Whatever! By the start of paragraph 5, Anderson had basically thrown up his hands concerning the reasons for “the steady decline in [average] SAT scores.” Unless it’s a lack of “reform” in our high schools, he just couldn’t think of a way to explain that steady decline!
“It is difficult to pinpoint a reason for the decline in SAT scores,” Anderson wrote, making a statement which is technically accurate but grossly misleading. Before we provide the background to your new mantra, let’s improve on Anderson’s statement, which was in essence a familiar establishment con.
Is it “difficult to pinpoint a reason for the decline in [average] SAT scores?” Actually, it’s impossible! In part, that’s because there is no single “reason” for the decline in those average scores. But it’s also impossible to “pinpoint a reason” because of this basic point:
The SATs are not designed for that purpose! The SATs are not designed for that purpose! The SATs are not designed for that purpose!
Let’s explain what we mean by that blindingly obvious statement, which you should keep repeating.
Basically, there are two ways to conduct educational tests if you want to be able to make sensible year-to-year comparisons. You can test all the kids in the population under review. Or you can test a “scientifically selected,” representative sample of all those students or children.
Each technique is in wide use. The SATs employ neither.
Universal testing: You can test all the kids in a given population. This is what the fifty states attempt to do in the annual statewide tests which were mandated by No Child Left Behind.
To cite one example, the state of Maryland attempted to test all its fourth graders in the spring of 2014. One year later, it attempted to test all its fourth graders in the spring of 2015.
No state ever manages to test all its students, but the percentages tested run very high. This permits certain types of reasonable year-to-year comparisons.
Representative samples: Other programs test “representative samples” of the populations in question. That includes the National Assessment of Educational Progress (NAEP), the so-called “gold standard” of domestic testing.
Every few years, the NAEP tests the nation’s fourth- and eighth-graders. But it doesn’t try to test all such kids. Instead, it tests carefully selected samples of same.
It’s much like election polling. The samples are selected to provide representative numbers of kids by race, ethnicity, family income and geographic region. Whether in election polling or educational testing, no sample can perfectly represent the full population from which it is drawn. But carefully drawn representative samples can also provide for sensible comparisons from one year to the next.
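The idea behind a representative sample can be made concrete. Below is a minimal Python sketch of the stratified-sampling approach described above—drawing a sample whose demographic mix matches the population's mix. The group names, population shares and cohort sizes are invented for illustration; real sampling designs, such as NAEP's, are far more elaborate.

```python
import random

# Invented population shares, for illustration only.
population_shares = {"group_a": 0.55, "group_b": 0.25, "group_c": 0.20}

def stratified_sample(students_by_group, sample_size, shares):
    """Draw a sample whose demographic mix matches the population's mix,
    so comparisons aren't distorted by who happens to show up."""
    sample = []
    for group, share in shares.items():
        quota = round(sample_size * share)  # seats reserved for this group
        sample.extend(random.sample(students_by_group[group], quota))
    return sample

# Toy population: 1,000 students in each group.
students = {g: [f"{g}_{i}" for i in range(1000)] for g in population_shares}
sample = stratified_sample(students, 100, population_shares)
# The sample of 100 contains 55, 25 and 20 students from the three groups,
# mirroring the (invented) population shares.
```

Because the sample's composition is pinned to the population's, the averages it produces can be sensibly compared from one year to the next—which is precisely what a self-selected pool of test-takers cannot guarantee.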
In running the SATs, the College Board doesn’t engage in either practice! The SATs don’t test all the nation’s high school seniors. Nor do the SATs attempt to test a representative sample of same.
The College Board simply gives its tests to any student who signs up and takes them! There’s nothing wrong with that practice, of course, until we start over-interpreting the results of such testing. But every year, the demographic blend of the tested students has been changing, in ways which can be fairly substantial. This means that the SAT program isn’t designed to produce meaningful year-to-year comparisons.
Presumably, Anderson understands this. As he continued his front-page news report, he quoted a College Board official and offered some sensible warnings about overinterpretation.
But alas! Wouldn’t you know it? He left one warning out:
ANDERSON: Schmeiser cautioned against “overinterpreting small fluctuations” in average scores from year to year.
Caveats abound when SAT scores are released. The students who take it are in most cases a self-selected sample, motivated to endure a grueling exercise of 3 hours and 45 minutes on a Saturday. (The test is offered during school days in all public high schools in the District of Columbia and a handful of states.)
Some students take the SAT two or three times. Scores also track closely with family income, rising with affluence, so annual variations in who takes it can swing the results. That makes comparisons of scores among schools, school districts or states problematic. The lower the participation, generally, the higher the scores.
That statement is perfectly accurate. But the lack of representative samples also makes year-to-year comparisons of SAT scores “problematic,” as Schmeiser may have said.
But so what? Starting right in paragraph one, Anderson built his entire front-page report around just such a comparison! Needless to say, it led to a standard conclusion about the need for “reform.”
Is it “difficult to pinpoint a reason for the decline in [average] SAT scores?” Actually, no—it’s impossible! It’s impossible to pinpoint the reasons for the decline in overall average scores—or for the rise in many scores on both the SAT and the ACT once you “disaggregate” scores, once you break the scores down by demographic groups.
Alas! Due to our brutal racial history—due to ongoing social policy—white kids and Asian-American kids score much higher on the SATs, on average, than black and Hispanic kids do. And duh! If the lower-scoring groups constitute a higher percentage of students tested as the years go by, this will tend to lower the average scores!
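The composition effect is simple arithmetic. Here is a minimal Python sketch of the point; the cohort sizes and group averages are invented for illustration, not actual College Board figures. Every group's average rises, yet the overall average falls, purely because the mix of test-takers shifts.

```python
# Invented figures, for illustration only: two demographic groups,
# given as (number_of_test_takers, group_average_score) pairs.

def overall_average(groups):
    """Composite average across groups, weighted by number of test-takers."""
    total_takers = sum(n for n, _ in groups)
    return sum(n * score for n, score in groups) / total_takers

# Hypothetical Year 1: the higher-scoring group is 70% of the pool.
year1 = [(700, 1550), (300, 1350)]
# Hypothetical Year 2: BOTH groups improve by 10 points,
# but the lower-scoring group grows from 30% to 45% of the pool.
year2 = [(550, 1560), (450, 1360)]

print(overall_average(year1))  # 1490.0
print(overall_average(year2))  # 1470.0 -- every group rose, yet the average fell
```

This is the familiar "Simpson's paradox" pattern: when the demographic blend of test-takers shifts toward lower-scoring groups, the composite average can decline even as every group's scores improve.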
Anderson understands that, of course. Everyone understands that! But it seems he didn’t want to explain it—and this unhelpful, brain-dead reluctance is quite widespread in our “mainstream press corps.”
Tomorrow, we’ll look at where it can lead when big newspapers play us this way. In the meantime, remember two basic points:
At the Washington Post, all roads lead to decline in our public schools, probably due to the lack of “reform.” By rule of Hard Pundit Law, this official establishment narrative must always be repeated, no matter how large a con you must pull to reach that desired conclusion.
Please remember that basic point! As we’ve shown you through the years, that narrative rules all mainstream coverage of educational testing, international and domestic.
Results of educational testing must always lead to that conclusion! Something has to be terribly wrong in our schools, presumably due to our ratty teachers and the lack of “reform!”
Please remember that basic point, which controls all coverage of testing. But while you’re at it, please remember your new mantra too:
The SATs are not designed for that purpose! The SATs are not designed for that purpose!
Presumably, Anderson understands that. Perhaps it was his corporate editor who made him take it out!
Tomorrow: Onward and downward to the horrors at Slate. Beyond that, what did the AP write? What occurred at the Times?