This work is amazingly bad: The upper-end press corps is very unskilled. It’s hard to grasp how low the intellectual standards are within this group.
For an example, consider the way the Washington Post and the Atlantic have reported a new batch of data.
We’ll start with the Washington Post. Yesterday morning, two education writers reported the new SAT scores for students in the DC region and in the nation as a whole:
ST. GEORGE AND ANDERSON (9/26/13): SAT scores hit eight-year high in Va.; D.C. also sees gains

Virginia students received their highest scores ever on the modern SAT college admission test this year, and scores also rose in the District even as national averages remained unchanged. Maryland’s scores dropped for the third straight year, according to data for the Class of 2013 released Thursday.

If you know anything—anything at all—you know we’re in dangerous territory here. Still, those gains in average scores in Virginia and DC sounded pretty darn good.
Alas! The Post had started us out on burgundy. We soon hit the harder stuff:
ST. GEORGE AND ANDERSON: Nationally, the results for the Class of 2013 mirrored those for the preceding year’s class. Average scores in critical reading (496), math (514) and writing (488) were all unchanged. Each section of the exam is worth 800 points.

What’s more, the share of students who met or exceeded a benchmark that the College Board considers a key predictor of “college and career readiness”—a composite score of 1550—has been virtually unchanged for the past five years. The share now stands at 43 percent.

Considered another way, that means 57 percent of this year’s high school graduates who took the test did not meet the readiness benchmark.

“While some might see stagnant scores as no news, we at the College Board consider it a call to action,” David Coleman, the nonprofit organization’s president, said in a conference call with reporters. He said schools must expand access to rigorous course work for all students. “We are impatient with the state of progress.”

By the mandates of Hard Pundit Law, gloom is required in stories like this. On a national basis, the average score remained unchanged! This was soon described as “stagnation.”
Gack! As everyone knows, it’s dangerous to make comparisons from one year to the next with the SAT—or from one state to the next. The SAT is taken voluntarily—and almost every year, a larger portion of the student population chooses to take the test.
This tends to suggest that a less “elite” group of students is being tested each year. This makes it hard to compare average scores from one year to the next.
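The arithmetic behind that warning is easy to demonstrate. Below is a minimal sketch in Python, with wholly invented numbers: the underlying distribution of student scores never changes from year to year; the only thing that grows is the share of students who choose to take the test.

```python
import random
import statistics

# A minimal sketch with invented numbers: a fixed population of student
# scores that never changes. The only thing that grows from "year" to
# "year" is the share of students who opt to take the test.
random.seed(0)
population = sorted((random.gauss(500, 100) for _ in range(100_000)), reverse=True)

# Assume, purely for illustration, that test-takers come from the top of
# the distribution: first the top 30% take it, then 40%, then 50%.
for share in (0.30, 0.40, 0.50):
    takers = population[: int(share * len(population))]
    print(f"top {share:.0%} take the test -> mean score {statistics.mean(takers):.0f}")
```

The average falls each “year” even though not one student’s score changed. That composition effect is the interpretive danger the Post’s report failed to explain.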
If you don’t know that, you don’t know anything about testing. At the Post, Donna St. George and Nick Anderson showed little sign of knowing that in their lengthy report.
Eventually, they issued a partial warning about this matter. But if readers blinked, they missed it; the warning was brief and heavily veiled:
ST. GEORGE AND ANDERSON: Michael J. Petrilli, an education analyst at the Thomas B. Fordham Institute, said college admission test scores should be read with caution because the test takers are not a representative sample from the nation’s high schools. But, he said, the unchanged national SAT scores dovetail with other national test data that show stagnant achievement in high school.

“You can say that at the 12th-grade level, the major trend, as has been the case for many years, is flat,” Petrilli said, adding that the trend contrasts with growth in earlier grades. “It’s one of the great questions in education policy today: Why have the gains at the lower level not translated into gains at the higher level?”

In that one sentence in paragraph 10, Petrilli was paraphrased giving a partial account of this well-known, obvious problem. By Hard Pundit Law, he immediately proceeded to a gloomy rumination about the nation’s “stagnant achievement.”
Petrilli is very bright. We have no record of his full remarks to the reporters on this subject. But St. George and Anderson, and their editor, ought to be removed from this beat for making this presentation, which thoroughly failed to inform Post readers about the interpretive dangers here.
Truly, that was gruesome reporting. Over at the Atlantic, Julia Ryan was worse.
Ryan seems thoroughly clueless about the interpretive problems. How do people of this caliber get jobs in the upper-end press?
RYAN (9/26/13): This Year's SAT Scores Are Out, and They're Grim

Of the 1.66 million high school students in the class of 2013 who took the SAT, only 43 percent were academically prepared for college-level work, according to this year’s SAT Report on College & Career Readiness. For the fifth year in a row, fewer than half of SAT-takers received scores that qualified them as “college-ready.”

The College Board considers a score of 1550 to be the “College and Career Readiness Benchmark.” Students who meet the benchmark are more likely to enroll in a four-year college, more likely to earn a GPA of a B- or higher their freshman year, and more likely to complete their degree.

“While some might see stagnant scores as no news, the College Board considers them a call to action. These scores can and must change—and the College Board feels a sense of responsibility to help make that happen,” the report said.

Ryan’s report includes two graphics. The first is very hard to interpret. As if by Hard (Elite) Pundit Law, she adopted a gloomy tone throughout, from that “grim” headline on down.
Ryan never said a word about the problem with making year-to-year comparisons. She made such comparisons all through her piece without discussing the dangers.

It’s very hard to make year-to-year comparisons with the SAT. If you don’t know that, you don’t know anything about testing.
Julia Ryan doesn’t know that! Just so you’ll know, “JULIA RYAN writes for and produces The Atlantic’s Education Channel.”
Rubes know, the high elites don't: The very first commenter to Ryan’s piece understood what we’ve just told you:
COMMENTER: I wonder how much of this decline in quality is driven by increasing the size of the testing pool. In other words, back in the olden days, only the top 20% of students reliably took the SAT and went to college, whereas now the top 60% of students take the SAT, so the scores would be expected to go down as the pool size increases. What would be interesting would be to see how the numbers have changed for the top 20% over time, since that would be more indicative of how educational quality is changing. (Note that the numbers are for illustration only.)

At the Atlantic, readers understand this stuff. The person who produces the Education Channel doesn’t seem to.
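For what it’s worth, the commenter’s suggestion is easy to sketch as well. The toy example below (Python again, with wholly invented scores) holds the comparison slice fixed at the top 20 percent of the age cohort while the taker pool expands: the pool average sags, but the fixed slice holds steady.

```python
import statistics

# Hypothetical illustration of the commenter's idea: compare the same
# slice of the age cohort every year, even as the share of the cohort
# taking the test grows. All numbers below are invented.
COHORT_SIZE = 20                      # pretend each graduating class has 20 students
TOP_SLICE = int(0.20 * COHORT_SIZE)   # always compare the top 20% of the cohort

scores_by_year = {
    2011: [1850, 1700, 1610, 1540, 1490, 1430],                          # 30% take it
    2012: [1860, 1710, 1600, 1530, 1480, 1420, 1370, 1310],              # 40% take it
    2013: [1870, 1720, 1610, 1540, 1470, 1410, 1360, 1300, 1240, 1180],  # 50% take it
}

for year, scores in scores_by_year.items():
    ranked = sorted(scores, reverse=True)
    top = ranked[:TOP_SLICE]
    print(f"{year}: pool mean {statistics.mean(ranked):.0f}, "
          f"top-20%-of-cohort mean {statistics.mean(top):.0f}")
```

In this made-up data, the pool mean drops every year while the top slice is flat; that is the distinction between a fair comparison and the one the press reports made.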
One final question. Why does the College Board (the SAT) release a report of this type?
The College Board’s official report says nothing about the interpretive dangers, even as it notes the expansion in the size and makeup of the student pool being tested each year. A competent press corps would ask the hapless David Coleman why he would do such a thing.