Thursday, April 14, 2016

Guardian Series on Internet Harassment

This week, The Guardian has been running a series of articles on Internet harassment, bullying, and abuse.

In researching comments at its own site, The Guardian found that (in news that will surprise no one) women were disproportionately likely to receive abusive comments following their articles.  Because that finding is so unsurprising, I instead want to highlight two issues:

(1) Moderating Requires Human Judgment - At the end of the above-linked article, readers are invited to "play moderator" and use their own judgment to decide which sample comments they would choose to block. After making a selection, the site then shows its moderators' judgment and reasoning for whether or not the comment was blocked.

To me, the exercise is valuable because it shows that, in practice, comment moderation is more of an art than a science. Even when a site has a comment policy, choosing which comments to allow and block is not a simple matter of plugging the comment into an algebraic formula. For instance, sample comment 4 pertained to antisemitic conspiracy theories. The comment itself was (content note: antisemitism):
"I don’t think that pointing out the disproportional political influence Jews have in most western societies can be called a conspiracy. But branding people that point it out and labelling them anti-Semitic seems to me part of a conspiracy."
Here, the post did not explicitly engage in name-calling, and its tone could be perceived as civil. Yet moderating it appropriately requires the application of human judgment and context. The Guardian moderators blocked it because "suggesting Jewish people have disproportional influence in politics is an antisemitic trope with a long history" and it "suggests antisemitism doesn't really exist except as a way to silence people."

Thus, knowledge of that historical context is important. Moderators without that knowledge would allow the comment (thus tacitly implying that engaging in an antisemitic trope is not a breach of civility?). And, likewise, commenters without that knowledge would not understand why the comment was uncivil. Such commenters often end up feeling "censored" and as though the forum is biased against what they view as their "legitimate, equally valid, dissenting" view.

I cannot stress this enough: This happens all. the. time. 

And, my response is the same as it was almost four years ago - if effective dialogue is the goal of an Internet forum, all sides must approach the conversation sincerely trying to understand other points of view and relevant historical contexts. That is an incredibly hard thing to monitor in large, Guardian-like forums. So, I think it is fair practice to block comments that play into bigoted tropes, even if the commenter is not intending to be uncivil.

(2) Addressing the Problem - We are beyond the point where it is acceptable for companies to merely point out that the problem of Internet harassment exists.

As I've been saying for years now, it's time for the creators of platforms to be accountable for seriously addressing this issue, and for law enforcement to get up to speed on technology-based harassment. If someone is Tweeting death threats to a woman, it is unacceptable for her to file a police report with an officer who doesn't know what Twitter is.

I found Mary Hamilton's Guardian piece on the issue to be thoughtful. A snippet:
"The issue of comments on news sites is often conflated with conversations about free speech - about the ability of individuals to speak their minds without fear of government censorship. But, as we do with the stories we publish, the Guardian can and should make decisions about the tone of the conversations we want to see flourish here. Allowing freedom for some means effectively silencing others - and deciding to let everyone speak regardless of what they say is, in effect, a statement that abuse is acceptable. Moderation is not censorship, any more than editing is - it’s a careful process that aims to curate the best of the web and allow expert voices and thoughtful discussion to emerge."
She adds that The Guardian will be implementing policies and procedures to protect staff from abuse. Good. The Internet has operated as the (somewhat) Wild West for so long that I think many people have a sense of entitlement to being able to wantonly share their opinions on any site they want, without regard for whether their voice fits within the culture or community the site owners are trying to create.

Yet, whether intentionally or not, if you're creating content on the Internet, you're at least in part creating community. Even if you comment at a blog, you are not a passive consumer, but a user. It's up to all of us users as to what kind of communities we want to create and be a part of (with a larger burden on site owners).

To bring this concept back home, since it's come up again recently in comments here, I would like to re-iterate that I primarily write for a feminist audience.  I don't care if anti-feminists don't agree with me, and I don't care if they feel I've ever unfairly stifled their precious viewpoints here.  Because many anti-feminist comments play into sexist tropes against women and feminists, even if that's not the intent of anti-feminist commenters, their comments are often not civil even if they're not calling me a cunt or threatening rape (although they've done both here).

I have engaged, and will engage, with them here from time to time (as my own time permits), but to be honest, I think there is little that anti-feminists have to teach me about sexism, as they rarely show sufficient understanding of the topic themselves. That may legit sound condescending, but to be honest, I've rarely not been condescended to myself when interacting with anti-feminists who think they have lots to teach me and other women about how the world works.

Besides, there are a lot of femslash fan videos on YouTube that aren't going to watch themselves, so. There's that. Life priorities are important.
