(This is the second of a series of posts criticizing how the National Violence Against Women Survey (NVAW) measured rape prevalence. This post is the first of two examining how NVAW measures attempted rape, as opposed to completed rape. This post will criticize the methodology of the question used to measure attempted rape; the next post, which I think will be more interesting, will look at underlying questions of how we define “rape” and “attempted rape,” and what that means for a survey attempting to measure attempted rape prevalence. Future posts in this series will look at ways that NVAW is both underestimating and overestimating rape prevalence, and at issues with how NVAW measures the prevalence of rape against men.)
(WARNING: This post and the subject it discusses may be triggering. This post, and other posts in this series, discuss rape prevalence studies in an “academic” tone, similar to the tone used by academic studies of rape prevalence. Understandably, some folks may not want to read that. The post itself is below the fold.)
On his own blog, Sailorman criticizes the National Violence Against Women Survey’s (NVAW) ((NVAW’s survey, and how it measures rape prevalence, is described in four reports written by Patricia Tjaden and Nancy Thoennes, and published by the US government through the National Institute of Justice: Prevalence, Incidence and Consequences of Violence Against Women, Extent, Nature and Consequences of Intimate Partner Violence, Full Report of the Prevalence, Incidence, and Consequences of Violence Against Women, and Extent, Nature and Consequences of Rape Victimization. A fifth report, “National Violence Against Women Survey Methodology Report,” has not been published but was kindly provided to me in draft form by Ms. Tjaden.)) measurement of “attempted rape” on methodological grounds. In particular, Sailorman says, “The wording of the questions was not consistent. In particular, the wording of the question relating to attempted rape was quite different (crucially so IMO) from the wording of the questions relating to actual rape.”
What’s Sailorman talking about? Well, here’s a typical example of one of the four questions NVAW asked regarding completed rape:
Has anyone, male or female, ever made you have oral sex by using force or threat of force? Just so there is no mistake, by oral sex we mean that a man or boy put his penis in your mouth or someone, male or female, penetrated your vagina or anus with their mouth.
In contrast, this is the one question NVAW asks about attempted rape:
Has anyone, male or female, ever attempted to make you have vaginal, oral, or anal sex against your will, but intercourse or penetration did not occur?
Sailorman then argues:
…All of the questions relating to rape are substantially the same…. All include the word ‘force.’ All include the word ‘threat.’ […] If you think the missing language is relevant (and it’s hard to argue it’s not, given the language of the first four questions) then it is asking about attempts to get someone to have sex that do not necessarily include force, threats, or harm.
I agree with Sailorman that the NVAW’s approach to measuring attempted rape is weak, which is why when I’ve used NVAW as a source in the past, I’ve cited only their number for completed rape (according to NVAW, 14.8% of US women have been victims of a completed rape in their lifetime), rather than their number combining attempted and completed rapes (17.6%).
So why is the wording so different? Probably because the attempted rape question was written by different authors. The four questions about completed rape were taken word-for-word from a 1992 survey conducted by the National Victim Center (NVC). ((See National Victim Center and the Crime Victims Research and Treatment Center (1992), “Rape in America: A Report to the Nation,” Arlington, VA. This is not available online, as far as I know, but print copies can be ordered here.)) This was done, I assume, to allow NVAW’s and NVC’s survey results to be as comparable as possible.
However, the NVC survey didn’t survey respondents about attempted rape. Since the NVAW designers wanted to measure attempted and completed rape — perhaps for the sake of comparability to the best-known federal crime survey, the National Crime Victimization Survey (NCVS) ((NVAW, NCVS, NVC…. how many studies of rape prevalence including the letters “N” and “V” are there, dammit!)) — they added a fifth question relating to attempted rape. But the fifth question doesn’t seem to be as well thought out as the core questions are.
Why NVAW Probably Doesn’t Overestimate Attempted Rape Prevalence
Sailorman seems to think that the wording of the attempted rape question will tend to lead to false positives (that is, people who haven’t experienced attempted rape answering “yes” to the attempted rape question). I think he’s mistaken about that; the wording, combined with the fact that there’s only one question about attempted rape, will tend to lead to false negatives (that is, people who have experienced attempted rape not reporting it when surveyed).
The first flaw, as Sailorman describes, is the lack of internal consistency: the wording of the question about attempted rape should have hewed as closely as possible to the wording of the four survey questions about completed rape.
Sailorman says that the attempted rape question is too broadly worded, and will thus lead to false positives. His concern is reasonable on its face. However, in practice broadly-worded questions on rape prevalence surveys don't lead to a significant number of false positives. For example, Mary Koss' famous rape prevalence study included a question about alcohol usage which was worded too broadly, ((The wording of the relevant question in Koss' study is, "Have you had sexual intercourse when you didn't want to because a man gave you alcohol or drugs?")) causing many critics to claim that Koss' results exaggerated the prevalence of rape. But a later study by Martin Schwartz and Molly Leggett ((Schwartz, Martin D. and Molly S. Leggett (1999), "Bad Dates or Emotional Trauma? The Aftermath of Campus Sexual Assault," Violence Against Women 5(3): 251-271.)) tested the question empirically, repeating Koss' research but replacing Koss' original question about alcohol and rape with a much more specifically worded question. What they found is that making the question more specific did not alter the survey's results. (I discuss Schwartz and Leggett's study in more detail in this post.)
Sailorman is also probably unaware of the extensive pre-testing researchers conducted to make sure respondents understood the questions, and of NVAW’s “Section J.” The questions we’ve been discussing are screening questions, from “Section F” of the NVAW survey. However, respondents who answered “yes” to “Section F (Rape Victimization)” questions were asked follow-up questions by interviewers, from “Section J (Detailed Rape Report).” ((Although the survey designers used terms like “Section J (Detailed Rape Report),” the survey respondents were not exposed to that sort of cold terminology.)) These features of the NVAW make it likely that if there had been a question that was frequently misunderstood by respondents, leading to false positives, the problem would have been discovered and the question(s) rewritten.
Sailorman also seems concerned that NVAW's figure for overall rape prevalence, due to the inclusion of attempted rapes, may be an overestimate. But the numbers make it unlikely that this is an important problem. According to NVAW, completed and attempted rape prevalence combined is 17.6% (that is, 17.6% of US women have been victims of a completed or attempted rape), while completed rape prevalence is 14.8% and attempted rape prevalence is 2.8%. So even if we assume that fully half of the attempted rapes measured by NVAW were false positives, that would only change the overall rape prevalence number from 17.6% to 16.2%, which is not a difference with any real-world significance.
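(For readers who want to check the arithmetic behind that paragraph, here's a quick sketch. The prevalence figures are NVAW's as cited above; the "half are false positives" assumption is purely hypothetical, for the sake of argument.)

```python
# Sanity check of the false-positive arithmetic above.
# Figures are NVAW's lifetime prevalence percentages for US women.
completed = 14.8   # completed rape
attempted = 2.8    # attempted rape
combined = completed + attempted  # NVAW's combined figure

# Hypothetical worst case: half of all attempted-rape reports
# are assumed to be false positives.
adjusted = completed + attempted / 2

print(f"combined: {combined:.1f}%, adjusted: {adjusted:.1f}%")
# combined: 17.6%, adjusted: 16.2%
```

Even under that deliberately pessimistic assumption, the combined figure only drops by 1.4 percentage points.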
Why NVAW Probably Underestimates Attempted Rape Prevalence
So that’s why it’s unlikely that NVAW overestimated the prevalence of attempted rape. But why do I suspect NVAW underestimated attempted rape prevalence?
First, because there's only one screening question about attempted rape. That's a problem, because most experts in the field believe that respondents are most likely to accurately report having been raped if they are asked multiple, behaviorally specific questions about the event. ((Although, curiously, when NVAW tested a subsample of 1,000 women by asking half two questions about rape and the other half four questions, they did not find any significant difference. This conflicts with what other studies, such as the federal government's study The Sexual Victimization of College Women, have found.)) Indeed, the NVAW's screening questions about rape were explicitly designed based on this theory. That's why NVAW asks respondents five separate, specific questions about rape ((Male respondents were only asked four questions. I'll discuss that more in a future post in this series, focusing on problems with how NVAW approached measuring male victimization.)) — because the designers believe that asking just one broad question causes surveys to significantly underestimate rape prevalence.
And second, because NVAW’s question was not behaviorally-specific. Rather than focusing on a narrow kind of rape, it attempted to ask about “vaginal, oral, or anal sex against your will” in a single omnibus question. But most researchers believe that respondents who have been sexually victimized are more likely to report their experiences if they are asked behaviorally-specific questions. A better-worded question would have asked about only one type of rape (leaving it to other questions to ask about other types), and ideally would have described what was meant by “against your will” in some way.
It’s interesting to compare NVAW’s attempted rape results to Mary Koss’. Koss’ study ((Koss, Mary, et al (1987), “The Scope of Rape,” Journal of Consulting and Clinical Psychology, vol 55 (2) 162-170.)) used multiple, behaviorally specific questions to measure attempted rape; one of her questions, for example, was “Have you had a man attempt sexual intercourse (get on top of you, attempt to insert his penis) when you didn’t want to by threatening or using some degree of force (twisting your arm, holding you down, etc.), but intercourse did not occur?” According to Koss’ study, approximately 12% of college women in the US have experienced attempted rape at some point in their lives; according to NVAW, 2.8% of all US women have experienced attempted rape at some point in their lives. Perhaps the four-fold difference between 12% and 2.8% is caused by differences in the studies; but the very large difference also raises the possibility that NVAW’s methodology has significantly underestimated the prevalence of attempted rape.
If NVAW is repeated in the future — and I certainly hope it will be — study designers should consider using multiple, behaviorally-specific questions to ask respondents about attempted rape. And insofar as it’s possible, those questions should be worded similarly to the questions respondents are asked about completed rape.
The next post in this series will look at how “rape” and “attempted rape” can be defined, and what that means for a survey attempting to measure attempted rape.
Thanks for the link.
But two important bits of clarification:
First, I don’t think that the study actually over-reported attempted rape. (In that post, as a single example, I note “many women would, IMO, have answered “yes” EVEN IF they had the criteria of force, harm, and/or threat.”) The REAL LIFE prevalence of AR is, IMO, higher than that reflected in the study. I was going after it on methodological grounds, not on “there aren’t actually rapes going on” grounds.
I merely didn’t think the methodology was good. However, the links I read didn’t mention the “extensive pre-testing researchers conducted to make sure respondents understood the questions” and that may well render my worries moot. I’ll go and read those this week.
Second: I do think the inclusion of attempted rape stats in rape reporting is very important. You note that
This is only partially true. There’s not much statistically significant difference in those numbers, for sure. And it’s not as if the number of rapes would be “acceptable” if it were “only” 16.2%.
So why argue against it? Why bring it up at all? Because the real world difference is that this is a highly political fight.
Most people have a definition of rape which does not include attempted rape. So in that world, with the ‘standard’ definition, one number is accurate and the other number is inaccurate. THAT means that if you use the larger number, you’re opening yourself up to attack. And once you’re branded as inaccurate, you’re not in a good position to foment change.
And if that is true, then if the numbers aren’t very different (which you seem to agree is the case) that is even LESS of a justification for combining them.
In the anti-rape field and most other areas, I’m a “strength and limited ground” arguer. I’d rather make a more limited argument that I can 100% substantiate, than use a higher number and worry about someone raising a decent attack on it.
Okay. Sorry I misunderstood you.
Well, I agree with you that it’s better to use the “completed rapes” number in casual conversation and blog posts and the like, if only to avoid some pointless arguments, and to avoid accidentally misleading people. Which is why (iirc) I’ve always used the “completed rape prevalence” number whenever I’ve referred to the NVAW survey.
That said, for better or worse the NCVS — which, although it’s really lousy at measuring intimate violence (which includes most rape), is the US’s primary measure of crime rates — uses a measure of rape that combines attempted and completed rape. The NVAW designers most likely wanted to be able to produce a number that could be compared to the NCVS number, and that was a reasonable desire on their part.
They also deliberately designed their survey so that they could report prevalence both including and excluding attempted rape. So they deserve credit for successfully designing their survey so the numbers could be reported both ways.
There are, as you pointed out, conceptual issues in how we define “attempted” rape. I’ll be discussing those in my next post in this series.
Can you suggest which link I should read to find Section J? I’m hoping to have some time this weekend. I didn’t see it in the main study that I had read, so obviously I’m not looking in the right spot.
thx,
s
Just recently saw something which seems like it should be linked (since it’s from here):
The thread “The 1 in 4 distortion: Where did it come from?” (written by Amp) discusses the shitstorm that has come from people confusing “attempted rape” and “rape.”
I think it’s an excellent example of how a “reach” can backfire. As you note in your own post (and as I fully agree), 1 in 8 is PLENTY “bad enough” to make a fuss about.
NVAW is, IMO, going down the same road as did Koss.