The Post’s headline isn’t quite accurate. For one thing, they weren’t really “bots” (which to me suggests a program operating somewhat autonomously); they were puppet accounts, controlled directly by the researcher, Kevin Munger. From the study’s abstract:
I employ an intervention designed to reduce the use of anti-black racist slurs by white men on Twitter. I collect a sample of Twitter users who have harassed other users and use accounts I control (“bots”) to sanction the harassers. By varying the identity of the bots between in-group (white man) and out-group (black man) and by varying the number of Twitter followers each bot has, I find that subjects who were sanctioned by a high-follower white male significantly reduced their use of a racist slur.
The “sanction” was a tweet saying “Hey man, just remember there are real people who are hurt when you harass them with that kind of language”. Using this tweet, the high-follower white male puppets – and only those puppets – could improve behavior. Tellingly, the same tweet from low-follower black male puppets led to increased use of racial slurs.
Surprisingly, it was anonymous Twitter users whose behavior improved; non-anonymous users did not reduce their slur usage in response to being criticized. (I would have guessed the opposite.)
It’s a shame that he didn’t use actual bots, since that would be very useful if it worked. However, a bot might have a hard time distinguishing harassing tweets from other tweets (such as a person complaining about having been called a slur).
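To illustrate the difficulty: a minimal sketch (in Python, with "SLUR" standing in for an actual slur and `naive_flag` as a hypothetical helper, not anything from the study) of why simple keyword matching isn't enough.

```python
def naive_flag(tweet: str) -> bool:
    """Flag any tweet containing the slur, regardless of context."""
    return "SLUR" in tweet.upper()

# A harassing use of the slur and a victim's complaint about it:
harassing = "you're a SLUR and everyone knows it"
complaint = "some stranger just called me a SLUR today"

# Both tweets trip the filter, so a bot acting on this signal alone
# would sanction the victim as readily as the harasser.
print(naive_flag(harassing), naive_flag(complaint))  # True True
```

Telling use from mention requires understanding context, which is exactly the part that's hard to automate.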
I guess for the sake of reducing variables, he didn’t test responses to female identities. I hope someone does in a follow-up study. It wouldn’t surprise me if female identities, like black identities, were less effective at changing behavior, but I’d be interested to see the numbers.