THURSDAY, DECEMBER 22, 2022
A slightly odd research design: Readers, can we believe the things we're told by our most trusted tribunes? By the most trusted tribunes from our own blue tribe?
Sadly, we think the answer is no. In the case study we continue today, we offer one rather strange example.
At stake is a widely stated claim about the offensive and racist beliefs of "white medical students." In Tuesday's report, we showed you three instances in which some version of this claim has been stated in the Washington Post or in the New York Times.
In our experience, the claim in question is stated on a fairly regular basis. In Tuesday's report, one example came from the Washington Post's Michele Norris, a well-known former NPR anchor and also a good, decent person:
NORRIS (12/9/20): We are not just tussling with historical wrongs. A recent study of White medical students found that half believed that Black patients had a higher tolerance for pain and were more likely to prescribe inadequate medical treatment as a result.
Norris is a good, decent person with a long career in high-end mainstream journalism. As vaccine resistance grew within the black community, that was her account of the reason why many black Americans don't trust the medical establishment.
We're willing to admit it! When we read the highlighted claim, we didn't assume it was accurate. We didn't necessarily believe that half of a group of white medical students had said they believed that black patients have a higher tolerance for pain.
We didn't automatically believe it! We'd encountered too many bogus claims down through the years—bogus claims which pleasingly reinforced our blue tribe's preferred Storylines.
We decided to take a look at the recent study to which Norris referred. Today, we'll show you some of the things which struck us as strange about that widely cited study, the text of which you can peruse right here.
The 222 participants:
As you can see at that link, 222 medical students participated in the part of the study under review. According to the text of the study, their numbers broke down like this:
"first years, n = 63; second years, n = 72; third years, n = 59; residents, n = 28."
In short, participants included 194 people who were still in medical school, plus 28 medical residents.
All the respondents were "white." There was no attempt to evaluate the beliefs of any other group of medical students.
The 15 statements at issue:
In the part of the study under review, the 222 medical students (and residents) were asked to evaluate a set of fifteen statements. According to the authors of the study, eleven of the statements are false. Four of the statements are true.
These are the fifteen statements respondents were asked to assess:
1) Blacks age more slowly than whites
2) Blacks’ nerve endings are less sensitive than whites’
3) Black people's blood coagulates more quickly than whites'
4) Whites have larger brains than blacks
5) Whites are less susceptible to heart disease than blacks*
6) Blacks are less likely to contract spinal cord diseases*
7) Whites have a better sense of hearing compared with blacks
8) Blacks’ skin is thicker than whites’
9) Blacks have denser, stronger bones than whites*
10) Blacks have a more sensitive sense of smell than whites
11) Whites have a more efficient respiratory system than blacks
12) Black couples are significantly more fertile than white couples
13) Whites are less likely to have a stroke than blacks*
14) Blacks are better at detecting movement than whites
15) Blacks have stronger immune systems than whites
In the view of the study's authors, the four statements bearing asterisks are true. The other eleven are false.
The six permitted assessments:
The 222 medical students (including residents) were asked to assess each of those fifteen statements. They weren't asked to state, in their own words, whether they believed the statements. Instead, they were given a list of six possible responses.
At this point, we began to wonder about the design of this study. The six responses available to the participants are listed here:

Definitely untrue
Probably untrue
Possibly untrue
Possibly true
Probably true
Definitely true

Participants were asked to assess each of the statements in one of those six ways. Perhaps there's something we don't understand about some aspect of survey design, but we note one point of puzzlement:
There is no apparent difference between two of those permitted responses! If you say that a statement is "possibly true," you're automatically saying that it's also "possibly untrue." It seems odd to us to offer six possible assessments, two of which seem to be essentially equivalent.
Maybe there's something we don't understand about this type of survey design. We'll admit that we wondered how the study would have turned out if respondents had instead been given these five choices:

Definitely untrue
Probably untrue
Probably true
Definitely true
I don't know
How would the study have turned out then? We have no way of knowing, though we could offer a guess.
The way those responses were scored:
We've shown you the fifteen statements respondents were asked to evaluate. We've shown you the six possible assessments they were allowed to make.
According to Norris, half the respondents (something like 111 out of 222) said they believed the second statement listed above—said they believed, in her paraphrase, that "Black patients had a higher tolerance for pain."
As we'll show you tomorrow, that statement by Norris was blatantly, grossly inaccurate. For today, we'll end by showing you the strangest part of this research design—an apparent part of this study's design which strikes us as truly remarkable.
Respondents were given six different ways to "score" each of the fifteen statements. Did respondents believe the various statements, eleven of which were false?
Astoundingly, this seems to be the way the authors of the research "scored" the respondents' assessments. Once again, we're offering text from the study itself:
We collected data from a total of 418 medical students and residents. Two hundred twenty-two met the same a priori criteria as in study 1 and completed the study (first years, n = 63; second years, n = 72; third years, n = 59; residents, n = 28)...On average, participants endorsed 11.55% (SD = 17.38) of the false beliefs. About 50% reported that at least one of the false belief items was possibly, probably, or definitely true.
Are we reading that correctly? That passage seems to suggest that, if a respondent rated a statement as "possibly true"—which means that it's also possibly false—the respondent was recorded as "believing" the statement.
That strikes us as a very strange procedure. This additional excerpt from the text of the study suggests that this actually was the procedure:
For ease of interpretation and ease of presentation, we collapsed the scale and coded responses marked as possibly, probably, or definitely untrue as 0 and possibly, probably, or definitely true, as 1, resulting in percentages of individuals who endorsed each item.
According to that language, if a respondent said that some statement was "possibly true," that was taken to mean that the respondent had "endorsed" the statement in question.
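The collapsing rule quoted above can be sketched in a few lines of Python. To be clear, this is only an illustration of the coding scheme described in the study's text: the response labels follow the wording quoted above, and the sample respondent is invented, not drawn from the study's data.

```python
# Sketch of the study's collapsing rule: the six responses are folded
# into a binary 0/1 "endorsement" code, so that "possibly true" counts
# the same as "definitely true." The sample respondent below is invented.

ENDORSED = {"possibly true", "probably true", "definitely true"}            # coded 1
NOT_ENDORSED = {"possibly untrue", "probably untrue", "definitely untrue"}  # coded 0

def code_response(response: str) -> int:
    """Collapse a six-point rating into the study's 0/1 endorsement code."""
    if response in ENDORSED:
        return 1
    if response in NOT_ENDORSED:
        return 0
    raise ValueError(f"unexpected response: {response!r}")

# A hypothetical respondent who is merely unsure about one false item:
ratings = {
    "Blacks' nerve endings are less sensitive than whites'": "possibly true",
    "Blacks' skin is thicker than whites'": "probably untrue",
}

codes = {item: code_response(r) for item, r in ratings.items()}
# Under this coding, the "possibly true" answer is recorded as an endorsement.
print(codes)
```

Note what the sketch makes plain: the respondent who says a statement is merely "possibly true" receives exactly the same score as a respondent who says it is "definitely true."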
That strikes us as a strange procedure—perhaps as "ease of interpretation" gone wild. Consider:
In the rapidly shrinking real world, if someone says a statement might be true, does that mean the person believes the statement? Does that mean that the person has somehow "endorsed" the statement?
That strikes us as a strange type of scoring on the part of the study's authors. In part, we say that for this reason:
According to the authors of the study, four of the fifteen statements in question actually are true. For example, this statement is said to be true:
"Blacks have denser, stronger bones than whites."
Is that statement actually true? For ourselves, we don't have the slightest idea! Neither, we're willing to guess, did quite a few of the medical students and residents who took part in this study.
We'll also guess they had no idea about some of the other statements. That includes some of the eleven statements which are said to be false.
That said, the design of the study gave them no way to say they simply didn't know if these statements were true.
If they didn't know if a statement was true, they had to check one of the two (equivalent) assessments saying the statement was "possibly" true or untrue—and if they were unlucky enough to check the box marked "possibly true," they were apparently scored as believing / endorsing the statement in question.
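A bit of arithmetic shows how easily a headline like "about 50% endorsed at least one false item" can arise from this scoring. If a respondent marked each of the eleven false items "possibly/probably/definitely true" independently at some small per-item rate, the chance of flagging at least one item is 1 − (1 − p)^11. The 6% rate below is an assumed figure chosen for illustration, not a number from the study.

```python
# With 11 false items and an assumed per-item endorsement rate p,
# the probability a respondent endorses AT LEAST ONE false item
# (assuming independence across items) is 1 - (1 - p)**11.
# The rate below is a hypothetical illustration, not the study's data.

FALSE_ITEMS = 11
p = 0.06  # assumed per-item rate: ~6%

at_least_one = 1 - (1 - p) ** FALSE_ITEMS
print(f"P(at least one false item endorsed) = {at_least_one:.2f}")  # prints 0.49
```

Under these assumptions, a per-item rate of only about 6% is enough to produce the study's headline figure of roughly 50%.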
To all appearances, this seems to be the way participants' responses were scored. This seems to mean that we have no real way to know how many of the respondents actually did believe the various untrue statements, including the untrue statement Norris cited in the Washington Post.
That said, we do know this:
We do know that Norris' statement in the Post was grossly, wildly inaccurate. In fact, nothing even dimly resembling half of the study's participants checked a response endorsing that statement in any conceivable way.
Michele Norris is a good, decent person and an experienced, high-ranking journalist. Given prevailing blue tribe Storyline, her claim about those white medical students was vastly pleasing.
Her claim was also grossly inaccurate. As you can see from some of the text we've posted above, it wasn't even an accurate statement of what the study's authors had said.
Norris' claim was grossly inaccurate. It remains uncorrected today.
Tomorrow: A look at the actual numbers