Wednesday, April 22, 2015

Book Update 2015

For the first time in over a year, I've read two books written by men, both non-fiction. 

The first was Bruce Schneier's Data and Goliath, which I thought was great. The second, which I'm still reading, is Lawrence Wright's Going Clear: Scientology, Hollywood, and the Prison of Belief.

Wright uses the "gender neutral masculine" throughout (as Scientology founder L. Ron Hubbard likewise seemed to do in much of his writing and thought) to refer to all human beings. So I'm finding that to be pretty jarring after having had a whole year almost entirely free of it.

Don't worry though, next on my docket is Nancy Manahan's Lesbian Nuns: Breaking Silence. All will be well again in my world very soon.


Tuesday, April 21, 2015

Researchers Study Online Antisocial Behavior

I have been saying for a while now that the longer I've blogged, the more adept I've become at spotting patterns that help me predict when a commenter is going to become A Problem.

Via The Mary Sue, researchers at Cornell and Stanford have analyzed "troll" behavior online, in an article entitled, "Antisocial Behavior in Online Discussion Communities" [PDF here].

I use scare quotes around "troll," as it was used by The Mary Sue and within the study itself, because I find that term is often used in ambiguous ways. Oftentimes, it trivializes what, in reality, is violent and profoundly antisocial rhetoric and disruption to people's communities, forums, and lives.

The researchers, in the above-cited study, examined user behavior on CNN.com, Breitbart.com, and IGN.com, all of which post articles on which users may comment and all of which ban users found to be disruptive to the community. Analyzing the language in posts, the researchers found differences between what they call "Future Banned Users" (FBUs) and "Never Banned Users."

Some of these differences include:

  • "FBUs tend to write less similarly to other users, and their posts are harder to understand according to standard readability metrics." 
  • "They are also more likely to use language that may stir further conflict (e.g., they use less positive words and use more profanity).
  • "...FBUs make less of an effort to integrate or stay on-topic."
  • FBUs post more than those never banned.
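
As an aside on that first bullet, here is an illustrative sketch of one "standard readability metric," the Flesch Reading Ease score, in Python. It is only a sketch - the crude syllable counter is a stand-in, not the researchers' actual tooling:

```python
import re

def count_syllables(word):
    # Rough heuristic: count runs of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # Standard Flesch formula: higher scores mean easier-to-read text.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)

print(flesch_reading_ease("This is a simple sentence."))  # ~66, i.e., fairly easy
```
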
Well, yes, duh. But it's nice to see it actually researched, I guess. Of note, the study also found that communities become less tolerant of someone exhibiting antisocial commenting behavior the more frequently that person comments - that is, their later posts are more likely to be deleted than their earlier posts, even when the later posts are not actually worse.

What I've often noticed with respect to that point is that (a) some people actually do escalate their comments when they believe no one is paying attention to them, and (b) when people don't escalate their comments, they will simply post the same thing over and over and over and over again until someone does pay attention to them. Both methods exhaust the tolerance of online communities, for good reason.

I'd like to end with a finding that I think has particularly interesting practical implications for addressing antisocial online behavior in the future. From the study:
"[We] show that a user’s posting behavior can be used to make predictions about who will be banned in the future. Inspired by our empirical anal- ysis, we design features that capture various aspects of an- tisocial behavior: post content, user activity, community re- sponse, and the actions of community moderators. We find that we can predict with over 80% AUC (area under the ROC curve) whether a user will be subsequently banned. In fact, we only need to observe 5 to 10 user’s posts before a clas- sifier is able to make a reliable prediction. "
Rather than implementing a comment moderation system that preemptively blocks potentially antisocial comments before they're posted, I'm envisioning an automatically generated message such as, "This post may possibly violate community norms. Does it?", allowing community users to upvote or downvote their opinion (while disallowing multiple votes from the same IP address and user account, similar to Disqus).
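
Here is a rough sketch of how that community-review mechanism might work, assuming a simple in-memory store. The class and method names are hypothetical, and the one-vote-per-account-and-IP rule only loosely models how Disqus limits duplicate votes:

```python
from collections import defaultdict

class CommunityReview:
    def __init__(self):
        # post_id -> {"up": count, "down": count}
        self.tallies = defaultdict(lambda: {"up": 0, "down": 0})
        # post_id -> set of (account_id, ip_address) pairs that already voted
        self.voters = defaultdict(set)

    def vote(self, post_id, account_id, ip_address, says_it_violates):
        """Record one answer to: this post may violate community norms - does it?"""
        key = (account_id, ip_address)
        if key in self.voters[post_id]:
            return False  # the same account+IP pair may not vote twice
        self.voters[post_id].add(key)
        self.tallies[post_id]["up" if says_it_violates else "down"] += 1
        return True

    def violates_norms(self, post_id, min_votes=5):
        """Community verdict once enough votes are in; None means undecided."""
        t = self.tallies[post_id]
        if t["up"] + t["down"] < min_votes:
            return None
        return t["up"] > t["down"]
```

Requiring a minimum number of votes before reaching a verdict would keep any single early voter from deciding a post's fate on their own.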

Sticking with the definition of antisocial used in the above study - that is, behavior that deviates from a particular online community's standards - isn't perfect (shouldn't some online behavior be considered antisocial in any community?), but such a system would provide some community-level comment moderation that could lighten the burden on individual moderators.