
Twitter's suicide prevention policy is flawed and dangerous

DailyBite | Apr 06, 2018, 22:16


Twitter, a social media platform that has long struggled to curb harassment and abuse, now finds itself struggling with suicide prevention as well.

In February, Twitter announced, “While we continue to provide resources to people who are experiencing thoughts of self-harm, it is against our rules to encourage others to harm themselves. Starting today, you can report a profile, Tweet, or Direct Message for this type of content.”


Titled “Glorifying self-harm and suicide”, Twitter's new policy seeks to discourage suicide or self-harm. “When we receive reports that a person is threatening suicide or self-harm, we may take a number of steps to assist them, such as reaching out to that person and providing resources such as contact information for our mental health partners,” it states.

The rationale behind it is supporting “people who are undergoing experiences with self-harm or suicidal thoughts, but prohibiting the promotion or encouragement of self-harming behaviours”.

On paper, it seems like a good approach. After all, self-harm on social media is a fast-growing and dangerous trend. According to Nadine Kaslow, the former president of the American Psychological Association, acts of self-harm, especially suicide, are doubly harmful on social media because of the effect they have on those who turn into an unwitting audience. They tend to give others who are struggling with the choice of self-harm a greater sense of "acquired capability" — the idea that "if you can do it, I can do it".

A modern-day contagion.

But Twitter’s handling of the issue has been inept to the point of absurdity. The implementation of this new policy has, in fact, given rise to a disturbing trend: Twitter has been temporarily suspending and locking out accounts merely for using the words “kill me”, even when the words are not used in a suicidal context.


For example, if you are stuck in traffic and tweet about it to vent your frustration, but make the mistake of adding the words “kill me”, chances are you might find yourself suspended for a period that may last anywhere from 12 hours to a whole week.


In fact, even tweets more than five years old can trigger a suspension.

While this is a global problem, Twitter's solution is anything but. Suspended accounts are told to visit the Twitter Safety Centre, which is supposed to list resources aimed at helping those who suffer from mental health problems. What is worrisome is that in India, Twitter has no such mental health partners.

What is equally worrisome is Twitter’s methodology. Twitter told a tech journalist that “context is very important when it comes to tweets using language pertaining to self-harm”. The website has assured users that it does not “remove these [not pertaining to self-harm or suicide] tweets in an automated way”, relying instead on “bystander reporting” and “human review”. This implies Twitter has a mechanism that automatically detects self-harm tweets but suspends accounts only when they are manually reported and reviewed. The microblogging site’s actions, however, tell a different story.


Case in point: the many accounts Twitter has locked out or suspended because the phrase “kill me” was used in the most innocuous ways.

Why is this alarming? 

Per Twitter’s own claim, it relies on human review. If these tweets were found to violate the website's terms, there are two disturbing explanations: one, Twitter automatically suspends accounts that are reported; two, those reviewing the reported tweets are doing an exceptionally bad job.

In fact, the first explanation can be ruled out. Since this trend surfaced, several users have tested Twitter’s suicide prevention policy by tweeting the phrase “kill me” and having it reported by others. Many such accounts, mostly male and verified handles, were not locked out or suspended.

Techdirt, in a report, pointed out another possible explanation for this selective suspension: “Not everyone who tweets ‘kill me’ is getting suspended, so at least the algorithm is slightly more sophisticated than that. One explanation given is that when a user is reported for certain reasons, the system then searches through past tweets for specific keywords. Perhaps that works in some contexts, but clearly not all of them.”
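The behaviour Techdirt describes is easy to illustrate. The sketch below is a hypothetical reconstruction, assuming a naive pipeline in which a report triggers a keyword scan of the account's past tweets; the keyword list and function names are illustrative inventions, and Twitter's actual system is not public.

```python
# Hypothetical reconstruction of the report-triggered keyword scan
# Techdirt describes. Nothing here reflects Twitter's real code.

SELF_HARM_KEYWORDS = {"kill me", "kill myself", "want to die"}

def contains_keyword(tweet_text: str) -> bool:
    """Naive substring match: no context, sarcasm, or intent detection."""
    text = tweet_text.lower()
    return any(kw in text for kw in SELF_HARM_KEYWORDS)

def handle_report(past_tweets: list[str]) -> bool:
    """When an account is reported, scan its whole history for keywords.
    Returns True if the account would be locked or suspended."""
    return any(contains_keyword(t) for t in past_tweets)

# A tweet venting about traffic trips the same filter as a genuine
# cry for help, which is exactly the false positive users reported.
print(handle_report(["Stuck in traffic for two hours, kill me"]))  # True
```

Such a design would also explain why years-old tweets get swept up: the scan runs over the account's entire history, not just the reported tweet.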

But where does this leave Twitter?

A flawed self-harm prevention programme and the existing menace of online harassment and abuse are a lethal combination. The abuse itself can drive people to suicide. In December 2017, 23-year-old adult film actress Mercedes Grabowski, more popularly known as August Ames, was found hanging in her California home, a day after she was cyber-bullied. The actress, after turning down an offer to work with a male, bisexual adult film actor, had been branded a homophobe on Twitter by a number of people.

It is more difficult to get the armies of anonymous accounts and trolls that mass-abuse people on a daily basis suspended than it is to have someone locked out for the casual phrase “kill me”.

It is high time one of the world’s most popular social media sites took more responsibility.

Last updated: April 06, 2018 | 22:36