The headlines this week in the realm of college sexual assault have been about the
UVA resolution and
the new survey about the rate of sexual assault on campuses. Regarding
the latter, the headlines I have seen have looked like this: "1 in 4 college
women report being sexually assaulted."
I dislike
headlines like this because they are what people remember. They are
easy. And they are easy to refute by those who wish to diminish the
severity of this problem and/or blame the victims. Because headlines
like this are never truly accurate. Because what sound study can be
summed up with one statistic? There are always limitations.
This,
I realize, is a disconnect between the interests of the media and the
interests of researchers and research institutions. It is not likely to
change. Headlines will not become more nuanced and reflective of actual
findings.
I proceed regardless. I proceed in part
because the last study's one remembered statistic is still with us: 1 in
5. It is used in endless articles about the topic, documentaries,
conversations among advocates, policymakers, and in political speeches.
And again, those who wish to refute those numbers can do so because the
study was not perfect. It was small. It surveyed students at two
universities--large universities, but only two--in different geographic
regions. It was a web-based survey with a low response rate--typical of
web surveys, but low in comparison to other data collection methods. And
the published study itself says that the results are not generalizable
to other universities.
One of the researchers, as recently as last year, told the media that
the 1 in 5 statistic is being used out of context. It has not been
recognized for what it was: a foundation. A request for more
information. A call for additional research. It was not an ideal study.
No study is. But headlines do not say that and neither
do--usually--paragraphs one through three. The asterisks come later in
these reports.
So now we have a new study. The
Association of American Universities commissioned a study to look at
rates of sexual assault at its member schools. This is where the new 1
in 4 statistic comes from. Here are the asterisks:
- Small sample, low response rate. Only 26 schools participated. The response rate was less than 20%. The report addresses non-response bias and suggests that the 1 in 4 could be inflated because those who experienced or were affected by sexual assault could be more likely to fill out a survey about it.
- Definition of sexual assault. As the articles did eventually note, the
definition used by the survey was "broad." That means comparisons with
other studies that do not use the same definition, or among individual
schools, may not be possible. (More on why the so-called broad definition
is a good thing below.)
Here is what I take from the study:
- The broad definition is good because it accounts for a range of
behaviors and actions, which is important to people who have experienced
sexual assault. The hierarchy that is created among different acts and
between sexual assault and harassment is unproductive and arguably
damaging. Given the prevalent belief that penile penetration is the
only act that counts as sexual assault, some victims are left wondering
whether digital penetration or forced oral sex counts, or feeling that
they should get over it because "it could have been worse." Additionally,
the study reported the numbers for different categories of assault. The 1
in 4 is inclusive, but that number is broken down in the report.
- The concern by some students that the study was too explicit in its
description of sexual assault is connected to the confusion I speak of
above regarding the question of "what counts." It is also troubling
because it speaks to how difficult it is to actually talk about and
describe what happens in actual sexual assault. And if people find that
difficult to do in an anonymous survey... There are clearly implications
here about the difficulties of reporting and the need for thorough
training for those who handle reports from victims.
Before I write what I am going to write, I want to
acknowledge that this study is a good thing in that it attempts to
discover patterns about what is happening on campuses. There has been
criticism that the participating schools do not have to release the findings particular to their campuses. (Some of them have and will.) But the goal of the AAU study was not to condemn or humiliate individual schools; it was to discover the proverbial bigger picture. Schools themselves should be going beyond quantitative surveys with low response rates and response bias. They should always be in the process of assessing the campus climate.
That being said: does it matter that this number is different from the
long-reported 1 in 5?
If you told a young
woman that she had a 20% chance of being sexually assaulted versus a 25%
chance, would it make that much of a difference to her? To her parents? 1 in 4 versus 1 in 5. It is not more or less of an epidemic. It does not (or should not) provide more justification for training programs in bystander intervention.
Perhaps it matters in the way 1 in 5 mattered: to politicians and activists who cite it as a call to action. Maybe it matters if 1 in 4 means Congress will appropriate more money for hiring OCR staff, or if foundations earmark more money for studies of college sexual assault. But I would imagine that at every congressional hearing or meeting where this new study and statistic is cited, there will be opponents who say that the number is wrong, the study problematic. And that is what I fear. Because that takes attention away from the likely reality for that one woman, in a group of 4 (or 5 or maybe even 6!) of her female peers, who--statistically speaking--will be sexually assaulted during her time on campus.