The upper-end press corps is highly unskilled!

FRIDAY, SEPTEMBER 27, 2013

This work is amazingly bad: The upper-end press corps is very unskilled. It’s hard to grasp how low the intellectual standards are within this group.

For an example, consider the way the Washington Post and the Atlantic have reported a new bunch of data.

We’ll start with the Washington Post. Yesterday morning, two education writers reported the new SAT scores for students in the DC region and in the nation as a whole:
ST. GEORGE AND ANDERSON (9/26/13): SAT scores hit eight-year high in Va.; D.C. also sees gains

Virginia students received their highest scores ever on the modern SAT college admission test this year, and scores also rose in the District even as national averages remained unchanged. Maryland’s scores dropped for the third straight year, according to data for the Class of 2013 released Thursday.
If you know anything—anything at all—you know we’re in dangerous territory here. Still, those gains in average scores in Virginia and DC sounded pretty darn good.

Alas! The Post had started us out on burgundy. We soon hit the harder stuff:
ST. GEORGE AND ANDERSON: Nationally, the results for the Class of 2013 mirrored those for the preceding year’s class. Average scores in critical reading (496), math (514) and writing (488) were all unchanged. Each section of the exam is worth 800 points.

What’s more, the share of students who met or exceeded a benchmark that the College Board considers a key predictor of “college and career readiness”—a composite score of 1550—has been virtually unchanged for the past five years. The share now stands at 43 percent.

Considered another way, that means 57 percent of this year’s high school graduates who took the test did not meet the readiness benchmark.

“While some might see stagnant scores as no news, we at the College Board consider it a call to action,” David Coleman, the nonprofit organization’s president, said in a conference call with reporters. He said schools must expand access to rigorous course work for all students. “We are impatient with the state of progress.”
By the mandates of Hard Pundit Law, gloom is required in stories like this. On a national basis, the average score remained unchanged! This was soon described as “stagnation.”

Gack! As everyone knows, it’s dangerous to make comparisons from one year to the next with the SAT—or from one state to the next. The SAT is taken voluntarily—and almost every year, a larger portion of the student population chooses to take the test.

This tends to suggest that a less “elite” group of students is being tested each year. This makes it hard to compare average scores from one year to the next.
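The arithmetic behind that warning is easy to see with a toy simulation. To be clear, every number below is invented for illustration; this is not real SAT data, and the assumption that the strongest students opt in first is a stylized one. Hold every student's ability fixed and simply widen the share of the cohort that takes the test:

```python
# Hypothetical illustration (invented numbers, not real SAT data):
# nobody gets smarter or dumber; only the tested share of the cohort grows.
import random
import statistics

random.seed(0)

# Simulated "ability" for one cohort, on a rough SAT section scale
# (mean 500, sd 100), clipped to the 200-800 score range.
population = [min(800, max(200, random.gauss(500, 100))) for _ in range(100_000)]
ranked = sorted(population, reverse=True)

# If the strongest students opt in first, widening participation reaches
# further down the distribution, dragging the observed average down.
for share in (0.20, 0.40, 0.60):
    tested = ranked[: int(share * len(ranked))]
    print(f"top {share:.0%} of cohort tested -> average {statistics.mean(tested):.0f}")
```

The average falls with each expansion of the pool even though no student's ability changed. That is the whole problem with naive year-to-year comparisons.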

If you don’t know that, you don’t know anything about testing. At the Post, Donna St. George and Nick Anderson showed little sign of knowing that in their lengthy report.

Eventually, they issued a partial warning about this matter. But if readers blinked, they missed it, brief and heavily veiled as it was:
ST. GEORGE AND ANDERSON: Michael J. Petrilli, an education analyst at the Thomas B. Fordham Institute, said college admission test scores should be read with caution because the test takers are not a representative sample from the nation’s high schools. But, he said, the unchanged national SAT scores dovetail with other national test data that show stagnant achievement in high school.

“You can say that at the 12th-grade level, the major trend, as has been the case for many years, is flat,” Petrilli said, adding that the trend contrasts with growth in earlier grades. “It’s one of the great questions in education policy today: Why have the gains at the lower level not translated into gains at the higher level?”
In that one sentence in paragraph 10, Petrilli was paraphrased giving a partial account of this well-known, obvious problem. By Hard Pundit Law, he immediately proceeded to a gloomy paraphrased rumination about the nation’s “stagnant achievement.”

Petrilli is very bright. We have no record of his full remarks to the reporters on this subject. But St. George and Anderson, and their editor, ought to be removed from this beat for making this presentation, which thoroughly failed to inform Post readers about the interpretive dangers here.

Truly, that was gruesome reporting. Over at the Atlantic, Julia Ryan was worse.

Ryan seems thoroughly clueless about the interpretive problems. How do people of this caliber get jobs in the upper-end press?
RYAN (9/26/13): This Year's SAT Scores Are Out, and They're Grim

Of the 1.66 million high school students in the class of 2013 who took the SAT, only 43 percent were academically prepared for college-level work, according to this year’s SAT Report on College & Career Readiness. For the fifth year in a row, fewer than half of SAT-takers received scores that qualified them as “college-ready.”

The College Board considers a score of 1550 to be the “College and Career Readiness Benchmark.” Students who meet the benchmark are more likely to enroll in a four-year college, more likely to earn a GPA of a B- or higher their freshman year, and more likely to complete their degree.

“While some might see stagnant scores as no news, the College Board considers them a call to action. These scores can and must change—and the College Board feels a sense of responsibility to help make that happen,” the report said.
Ryan’s report includes two graphics. The first is very hard to interpret. As if by Hard (Elite) Pundit Law, she adopted a gloomy tone throughout, from that “grim” headline on down.

Ryan never said a word about the problem with making year-to-year comparisons. She made such comparisons all through her piece without discussing the dangers.

It’s very hard to make year-to-year comparisons with the SAT. If you don’t know that, you don’t know anything about testing.

Julia Ryan doesn’t know that! Just so you’ll know, “JULIA RYAN writes for and produces The Atlantic’s Education Channel.”

Rubes know, the high elites don't: The very first commenter to Ryan’s piece understood what we’ve just told you:
COMMENTER: I wonder how much of this decline in quality is driven by increasing the size of the testing pool. In other words, back in the olden days, only the top 20% of students reliably took the SAT and went to college, whereas now the top 60% of students take the SAT, so the scores would be expected to go down as the pool size increases. What would be interesting would be to see how the numbers have changed for the top 20% over time, since that would be more indicative of how educational quality is changing. (Note that the numbers are for illustration only.)
At the Atlantic, readers understand this stuff. The person who produces the Education Channel doesn’t seem to.
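The commenter's suggested check, tracking a fixed top slice of each cohort instead of the shifting pool of test-takers, can be sketched the same way. Again, the participation shares, years, and score distribution below are invented purely for illustration, just as the commenter's own numbers were:

```python
# Hypothetical illustration (invented numbers, not real SAT data):
# participation grows year over year while underlying ability does not.
import random
import statistics

random.seed(1)

def cohort(n=50_000):
    """One graduating class: simulated scores on a rough SAT section
    scale (mean 500, sd 100), clipped to 200-800, sorted high to low."""
    scores = (min(800, max(200, random.gauss(500, 100))) for _ in range(n))
    return sorted(scores, reverse=True)

for year, share in [(2009, 0.40), (2011, 0.50), (2013, 0.60)]:
    scores = cohort()
    pool = scores[: int(share * len(scores))]      # everyone who "took the test"
    fixed_top = scores[: int(0.20 * len(scores))]  # same top-20% slice each year
    print(year,
          f"tested-pool mean: {statistics.mean(pool):.0f}",
          f"top-20% mean: {statistics.mean(fixed_top):.0f}")
```

In this toy setup, the whole-pool average falls as participation grows while the top-20% average holds steady, which is exactly why the commenter calls the fixed slice "more indicative."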

One final question. Why does the College Board (the SAT) release a report of this type?

The College Board’s official report says nothing about the interpretive dangers, even as it notes the expansion in the size and makeup of the student pool being tested each year. A competent press corps would ask the hapless David Coleman why he would do such a thing.

22 comments:

  1. Given the exemplars we have as high profile education reporters, it wouldn't occur to them to ask such a question because they don't seem to understand what they are seeing.

    As for David Coleman, given his position, he can't try to clear the air of all the nonsense floating around about education and test scores. He fits right in with the rest of the "reformers" saying such awful things about the state of education in the US. He doesn't have to say bad things, he just has to let the bad go unchallenged.

    ReplyDelete
  2. The College Board is busy promoting its testing programs and trying to get more high schools to offer AP classes (so they will take the AP tests). You'd think they were profit-making.

    My experience as a reader for the AP exam was that they didn't want to hire or pay sufficiently expert readers, so they relied on high school teachers, placed in a summer-camp-type environment, who were given a rubric of correct answers (developed by a group of high school teachers). In some years the rubrics contained incorrect information, and students were denied credit, or graded incorrectly, for essay answers that were correct but beyond the expertise of the high school teachers grading the exam. Those teachers typically had a BA or BS in the AP subject but no graduate-level training in it, so they lacked the depth to evaluate sophisticated answers and were required to stick to the rubric. This was done to limit the judgment of the graders and for cost-control reasons; the only goal was to grade as many essays as quickly as possible. I think it was unfair to the top students, but they probably got enough correct answers on the multiple choice to offset these problems. My point is that the AP was run along the lines of a profit-making institution whose primary focus is processing as many units as possible at the cheapest cost. That isn't the way college-level grading is done, and it does not prioritize accuracy or expertise in assessing mass numbers of students. That is what the SAT is all about too. So it is no surprise to see the same values and attitude reflected by Coleman.

    ReplyDelete
    Replies
    1. Yep, exactly. Advanced Placement courses are more myth than reality. As researchers note, once the demographic characteristics of students are controlled for, the claims made for AP disappear.

      And the SAT? It's largely a sham.

      Delete
  3. Somehow my earlier comment disappeared so I'll try again.

    Judging by the exemplars of high-profile education reporting cited here, these reporters don't know enough to ask any real questions about the SAT scores.

    As for David Coleman, he knows that trying to clear the air of nonsense about test scores would hurt his standing in the world of public-education derision dominated by Gates, Koch, and others. So he is not about to try to set things straight.

    ReplyDelete
    Replies
    1. The second try was the charm. Adding the fact that he is trying to please Gates and Koch made me see what a truly despicable character this Coleman is. A very butleresque toady, that Coleman.

      As for your first comment disappearing, I think the "Go Away" Anonymous must have finally figured out how to make the damn thing work after all these many tries.
      Persistence has its rewards. But so do KAPLAN SAT Prep courses! TDH readers planning on entry exams should take note. TDH readers who are already also SAT exam readers need not.

      KZ

      Delete
  4. The benchmark score for success in college is 1550? Really? To make things a little too simple, that's 775 per test. That's at the 99th percentile for both.

    ReplyDelete
    Replies
    1. There are three parts to the SAT now. You're a few years behind the times.

      Delete
    2. what is time to a rat already in eternity anyway?

      Delete
    3. For sure, but an eternity? 1550/3 = 516.+ = right at 50%.

      Do you still mark the answers with a quill?

      Delete
    4. When I self identify with a rodent it is the porcupine. Therefore my quills are used for defensive purposes only.

      Delete
  5. I found comparative annual test-participation rates impossible to find on the College Board web site, and couldn't find them in any other source in a Google search.

    Not only would scores automatically go down with a larger segment of the population testing, but minority participation increased from 40% in 2012 to 46% in 2013. Given the difficulties minority communities traditionally face in the education system, with historically lower average scores than whites, that would certainly pull the overall score down even more.

    Can anyone say "disaggregation"? Apparently, Mr Coleman -- for that matter the entire College Board community responsible for its reports -- either has no clue what that means and its significance, or they are effectively under orders not to do anything that might compromise or undermine the "reform" agenda. The way the scores are reported, with heavy emphasis on a low percentage meeting a standard the College Board calls college "readiness" and with virtually no raw data that would permit even the most elementary analysis certainly is consistent with the latter.

    ReplyDelete
    Replies
    1. Here's a place to start.

      http://nces.ed.gov/fastfacts/display.asp?id=171

      Delete
    2. Here's what the feds tell us at your link:

      "The SAT (formerly known as the Scholastic Assessment Test and the Scholastic Aptitude Test) is not designed as an indicator of student achievement, but rather as an aid for predicting how well students will do in college."

      So, the SAT (an acronym that now stands for nothing) has nothing to do with student achievement.

      What the feds do NOT tell us (neither does the College Board) is that the SAT doesn't predict much in the way of success in college. It's a test that has extremely limited predictive power. College enrollment specialists find that it predicts between about 3 and 14 percent of the variance in freshman-year college grades (and after that zilch). As one college enrollment specialist quipped, "I might as well measure their shoe size." 

      The thing that the SAT measures best is family income. Colleges use SAT scores for two purposes: to make themselves "look good," and to leverage financial aid. As Matthew Quirk noted in The Atlantic, "schools make thousands of decisions based largely on [SAT] test scores...That students are rejected on the basis of income is one of the most closely held secrets in admissions." 

      Those decisions almost always favor upper-income students.



      Delete
  7. Part 1

    The piece by Julia Ryan was god-awful. It did nothing to educate readers about the SAT.

    And that's a real shame.

    The National Center for Education Statistics tells us this about the SAT:

    "The SAT (formerly known as the Scholastic Assessment Test and the Scholastic Aptitude Test) is not designed as an indicator of student achievement, but rather as an aid for predicting how well students will do in college."

    The SAT (an acronym that now stands for absolutely nothing) is a test that is NOT tied to the high school curriculum. So it doesn't measure "achievement."

    It's a test that has extremely limited predictive power. It is –– to paraphrase the National Center for Education Statistics –– a very poor predictor of "how well students will do in college." College enrollment specialists find that it predicts between about 3 and 14 percent of the variance in freshman-year college grades (and after that zilch). As one college enrollment specialist quipped, "I might as well measure their shoe size."

    Julia Ryan told readers nothing of the kind, however. Instead, she wrote this snarky (and demonstrably false) sentence: "For the fifth year in a row, fewer than half of SAT-takers received scores that qualified them as 'college-ready.'"

    Princeton Review does a lot of test prep work. Here's what Princeton Review founder John Katzman said about the SAT:

    “The SAT is a scam...It has never measured anything. And it continues to measure nothing...does it measure intelligence? No. Does it predict college grades? No. Does it tell you how much you learned in high school? No. Does it predict life happiness or life success in any measure? No. It's measuring nothing.”

    ReplyDelete
  8. Part 2

    Author Nicholas Lemann –– whose book The Big Test is all about the SAT –– said this about the SAT’s severe limitations:

    “The test has been, you know, fetishized. This whole culture and frenzy and mythology has been built around SATs. Tests, in general, SATs, in particular, and everybody seems to believe that it's a measure of how smart you are or your innate worth or something. I mean, the level of obsession over these tests is way out of proportion to what they actually measure.”

    The thing that the SAT measures best is family income. Colleges use SAT scores for two purposes: to make themselves "look good," and to leverage financial aid. As Matthew Quirk noted in The Atlantic, "schools make thousands of decisions based largely on [SAT] test scores...That students are rejected on the basis of income is one of the most closely held secrets in admissions." [note to Julia Ryan: READ this piece! http://www.theatlantic.com/magazine/archive/2005/11/the-best-class-money-can-buy/304307/?single_page=true ]

    The College Board is happy to help, selling student profiles, and software and "consulting" services "used to set crude wealth and test-score cutoffs." Students from upper-income families win, and students from lower-income families get shafted.

    The College Board routinely coughs up “research studies” to show that their test products are valid and reliable. The problem is that independent, peer-reviewed research doesn’t back them up. The SAT and PSAT are shams. Colleges often use PSAT scores as a basis for sending solicitation letters to prospective students. However, as a former admissions officer noted, “The overwhelming majority of students receiving these mailings will not be admitted in the end.” Some say that the College Board, in essence, has turned the admissions process “into a profit-making opportunity.”

    Perhaps even more perverse, the College Board, which produces the PSAT, SAT, and Advanced Placement courses and tests, now recommends that schools “implement grade-weighting policies...starting as early as the sixth grade.” The SIXTH grade! If that sounds rather stupid, perhaps even fraudulent, that’s because it is.

    Education reporters at The Post, The Atlantic, and elsewhere would do well to heed the research and to stop perpetuating the myths about the College Board and its products. If the mainstream press is going to tell the story of American public education, then it should tell it honestly and accurately.

    ReplyDelete
  9. You are right. It is perverse. All this measurement business is a reflection of male dominance over the course of human history, and began when one paranoid fellow noticed his appendage was larger than his cave's cohorts in the morning and began to crow about it.

    KZ

    ReplyDelete
    Replies
    1. My comment was, of course, in response to the excellent two-part commentary by Anon@ 7:56. My error in placement gives me the opportunity to add that, in my opinion, the mainstream press is a product of the scam system in part created and perpetuated by the SAT, which Anon so thoroughly debunked. The hard-drinking, self-educated, fact-finding reporter of yore has been replaced with elitist, clueless, score-inflated graduates of J schools who lack the other academic credentials otherwise attached by BOB to accurate coverage of education. Therefore the mainstream press should simply stop telling the story of American education because it has an inherent conflict of interest.

      KZ

      Delete
    2. Still scratching my head at your inability to put "KZ" in the "name" field instead of posting as anonymous and putting KZ at the bottom like a moron.

      Delete
    3. When you scratch your head do you find nits to pick, or, like BOB, do your peruse the NY Times looking for them there instead?

      KZ

      Delete
    4. If only everyone did as our fearless field marshal of the commentariat, Marcus, says to do, the world would be a much better place.

      Anonymous

      Delete
  10. "The upper-end press corps is highly unskilled!"

    Here's Seymour Hersh's solution:

    "I'll tell you the solution: get rid of 90% of the editors that now exist and start promoting editors that you can't control," he says. "I saw it in the New York Times. I see people who get promoted are the ones on the desk who are more amenable to the publisher and what the senior editors want, and the troublemakers don't get promoted. Start promoting better people who look you in the eye and say, 'I don't care what you say.'"

    ReplyDelete