We'll see him and raise him one: Kevin Drum makes an excellent point today about the way public school teachers routinely get discussed.
He starts by quoting this passage from a typical piece about teacher quality. The piece was published by the conservative-leaning American Enterprise Institute:
PETHOKOUKIS (6/13/16): Among the many studies cited: a University of Melbourne review of more than 65,000 papers on the effects of various classroom interventions. It concludes that what matters most is teacher expertise: "All of the 20 most powerful ways to improve school-time learning identified by the study depended on what a teacher did in the classroom."

Another paper found that students taught by teachers in the top 10% for effectiveness learn 1.5 years' worth of material in an academic year, three times as much as those taught by teachers in the bottom 10%....The big question, then, is to what extent good teaching can be taught. Are high-quality teachers born that way or can they be made?

"Duh," Drum repeatedly says. In what profession isn't it true that the top ten percent of practitioners are more effective than the bottom ten percent?
This distinction in quality is constantly cited with respect to teachers, Drum correctly says. But you never hear this type of complaint about people in other professions.
The denigration of public school teachers is a mandated press corps script. And yes, it's true—several such scripts turn on the alleged difference in performance between the top X percent of teachers and the bottom X percent.
(A few years back, Nicholas Kristof was routinely citing one of these standard semi-complaints.)
Drum is making an excellent point—but the problem here may be worse than he thinks. Here's a potential additional problem with this type of comparison:
When you see teachers evaluated by test scores, researchers are generally working from student performance on statewide achievement tests. But uh-oh! Those are the tests on which we've seen a fair amount of outright cheating over the past forty years.
We're speaking here about outright cheating, not about "teaching to the test." Here's why the possible existence of outright cheating creates a problem:
When researchers work with data from statewide tests, they have no obvious way to know if some of the "most effective" teachers actually cheated to produce their students' high scores.
Presumably, some of these teachers did cheat—but researchers will have no obvious way to know which teachers did so. To the extent that cheating occurs, it will exaggerate the apparent difference between the "most effective" teachers and their slacker counterparts.
We're often struck by the way educational experts and education reporters ignore the existence of cheating on standardized tests. Back on May 3, this was one of the problems with Motoko Rich's high-profile report in the New York Times, a report about performance patterns in the nation's many school systems.
Rich focused on the connection between student socioeconomic status and student achievement. We spent several weeks reviewing some of the larger problems with her analyses, which were massively botched.
We never got around to discussing the problem created by the presence of cheating. It seemed to raise its troubling head in these passages about school districts which produced surprisingly high test scores, given their students' relatively low SES:
RICH (5/3/16): The data was not uniformly grim. A few poor districts—like Bremen City, Ga., and Union City, N.J.—posted higher-than-average scores. This suggests that strong schools could help children from poor families succeed.

In one school district, Union City, N.J., students consistently performed about a third of a grade level above the national average on math and reading tests even though the median family income is just $37,000 and only 18 percent of parents have a bachelor's degree. About 95 percent of the students are Hispanic, and the vast majority qualify for free or reduced-price lunches.

Silvia Abbato, the district's superintendent, said she could not pinpoint any one action that had led to the better scores. She noted that the district uses federal funds to help pay for teachers to obtain graduate certifications as literacy specialists, and it sponsors biweekly parent nights with advice on homework help for children, nutrition and immigration status.

The district regularly revamps the curriculum and uses quick online tests to gauge where students need more help or teachers need to modify their approaches.

''It's not something you can do overnight,'' Ms. Abbato said. ''We have been taking incremental steps everywhere.''

Let's be clear: we know of no reason to think that test scores in these districts were caused by cheating, whether in whole or in part. That said, we also can't say that cheating wasn't a factor, whether in these districts or in a few other major outliers. (The much-maligned Steubenville, Ohio, leaps out from Rich's first graphic.)
We only note that Rich completely skips this possibility. Within the past few years, outright cheating on standardized tests has finally been recognized as a significant problem, both potential and actual. Except, apparently, at the New York Times, where Rich proceeds as if she's never heard a word about it.
It's generally assumed that cheating doesn't occur on the National Assessment of Educational Progress (NAEP), the widely-praised "gold standard" of domestic public school testing. That said, most researchers work with data from statewide testing programs, and outright cheating on these tests has been a problem for decades.
Education reporters and "educational experts" persistently act like they've never heard this—and in some cases, they probably haven't! At the New York Times, education reporting tends to be Potemkin all the way down.
Tomorrow: The Times presents an improved report on the Cleveland, Mississippi public schools