
Why Facebook's revenge porn policy is bizarre and woefully ineffective

DailyBite | May 24, 2018, 21:24


With the lines between our physical and online worlds blurring by the day, "revenge porn" indubitably shares space with data privacy and misinformation on the top shelf of grave concerns. For the uninitiated (though at this point, it is hard to imagine there are netizens who have not come across the term), "revenge porn" refers to the distribution of sexually explicit images or videos without the consent of the individuals they depict. In India, the phenomenon is colloquially known by the misnomer "MMS", a term that gained currency in the mid-2000s in the aftermath of the "DPS scandal".


Revenge porn, as the name suggests, comes with an obvious motive: blackmailing individuals into performing sex acts or continuing a relationship, or simply harming a person's reputation.

There are some who believe that "revenge porn" is a strictly "Western" problem. Such claims are easily rubbished. A 2016 survey by the Cyber & Law Foundation, an Indian NGO, found that 27 per cent of internet users aged 13 to 45 had been subjected to revenge porn in some form.

In recent years, the battle against the spread of "revenge porn" has largely been taken up by tech giants. Social media frontrunners like Facebook and Reddit, as well as companies like Google, have incorporated policies against "revenge porn". Incidentally, Reddit's 2015 policy banning "involuntary pornography" came months after "The Fappening", when a large collection of celebrity nude pictures — apparently stolen from the victims' hacked cloud storage accounts — was released.

Victims of the release included Jennifer Lawrence, Jenny McCarthy, Rihanna, Kate Upton, Mary Elizabeth Winstead, Kirsten Dunst, Ariana Grande and Victoria Justice.

Even pornographic websites have made an active effort to filter out revenge/non-consensual pornographic content and have tools for people to flag such videos should they still get uploaded.


In 2017, in the midst of an onslaught of criticism over fake news, messy algorithms, echo chambers, polarisation and data privacy, Facebook too launched a programme intended to combat revenge porn, a problem that has continued to grow on the platform despite its strict (almost comically so) community guidelines. Facebook restricts the display of nudity or sexual activity. According to the site's guidelines: "While we restrict some images of female breasts that include the nipple, we allow other images, including those depicting acts of protest, women actively engaged in breast-feeding, and photos of post-mastectomy scarring. We also allow photographs of paintings, sculptures, and other art that depicts nude figures."

The method employed by Facebook, however, doesn’t inspire a lot of confidence, especially now.

Better think twice about hitting the send button on those nudes. [Photo: DailyO]

Facebook, partnering with the Australian government's e-Safety Commissioner, started developing a new system to combat the spread of non-consensual explicit media. The plan: if you feared the release of your private photographs (ones you may have sent someone consensually), you could upload those same photos through a special tool on Facebook. Facebook would then digitally "hash" the media, track it using the same artificial intelligence-based technologies behind its photo- and face-matching algorithms, and prevent it from being uploaded and shared in the future.


According to Vox, a piece of technology called perceptual image hashing — the same technology used for reverse image searches on Google and by law enforcement agencies to track child pornography — would allow Facebook to control the spread of sexually explicit images, provided their release can be pre-emptively anticipated.
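For the technically curious, here is a minimal sketch of how one common variant of perceptual hashing, "difference hashing" or dHash, reduces an image to a compact fingerprint. This is a simplified illustration in Python (using the Pillow library), not Facebook's actual implementation, which has not been made public:

```python
# A minimal sketch of perceptual "difference hashing" (dHash) — an
# illustration of the technique, not Facebook's unpublished system.
from PIL import Image  # assumes the Pillow library is installed


def dhash(path, hash_size=8):
    """Reduce an image to a 64-bit fingerprint that survives resizing,
    recompression and small edits."""
    # Shrink to (hash_size+1) x hash_size pixels in grayscale, discarding
    # fine detail so the hash reflects the image's overall structure.
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    # Compare each pixel with its right-hand neighbour: brighter = 1.
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming(h1, h2):
    """Number of differing bits; a small distance means near-identical images."""
    return bin(h1 ^ h2).count("1")
```

Because the hash is computed from the image's coarse structure rather than its raw bytes, a re-saved, resized or lightly edited copy produces a nearly identical fingerprint, which is what makes the technique useful for tracking images as they spread.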

The programme was expanded to the US, the UK, and Canada on May 22, 2018.

The problems

The Vox report noted that the process by which users of the pilot programmes are asked to submit their photographs is "convoluted and laborious". Not only do users first have to fill out a form through one of Facebook's partner networks in each of the four pilot countries, they also have to consent to other human eyes taking a peek at the sexually explicit images in question: "a specially trained representative from [their] Community Operations team."

One concern, especially in the aftermath of the Cambridge Analytica exposé, has been that of data leaks. If the private Facebook data of millions could be siphoned off by a voter-profiling firm, what is to stop these images from leaking from yet another platform or source? Thankfully, that should not be the case. As per reports, Facebook will not store the images, only their digital image hash — essentially a unique identifying code for each photo submitted — which is all it needs to detect any attempt to upload the said images.
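To illustrate why keeping only the hash can suffice, here is a continuation of the sketch above, assuming the hypothetical dhash() and hamming() helpers already defined (again, an illustration under stated assumptions, not Facebook's actual pipeline): the submitted image is fingerprinted and discarded, and every new upload is checked against the stored fingerprints.

```python
# Continuation of the earlier sketch: store only fingerprints, never images.
flagged_hashes = set()


def register(path):
    """Hash a submitted image and keep only the 64-bit fingerprint."""
    flagged_hashes.add(dhash(path))
    # The original file is never stored and can be deleted immediately.


def should_block(path, threshold=5):
    """Reject an upload whose hash is within `threshold` bits of a flagged
    one, catching re-encoded or lightly edited copies too. The threshold
    value here is an illustrative assumption, not a published figure."""
    h = dhash(path)
    return any(hamming(h, f) <= threshold for f in flagged_hashes)
```

A 64-bit fingerprint is far too lossy to be reversed into the original picture, which is why storing it is far less risky than storing the image itself.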

There is still a catch. Facebook does not delete the images immediately; the team deletes an image from its database within a week of its submission. And one has to take Facebook's word for such a promise. There is, after all, no way to ascertain whether or not it has actually deleted the image.

The silence over WhatsApp

While these are nascent issues that may be ironed out in the months and years to come, making the process more secure and effective, what Facebook chooses not to mention or acknowledge is another platform it owns: WhatsApp.

Facebook's instant messaging application WhatsApp has a whole host of problems surrounding it, especially in India. A veritable cesspool of misinformation and propaganda, WhatsApp has an instant reach that allows another big problem to fester within its confines: the mass-sharing of sexually explicit (and often non-consensual) images and videos.

Even though the consensual sharing of sexually explicit images over instant messaging applications and services (like WhatsApp, Telegram, Kik, WeChat and Signal) can itself be illegal — Section 67 of the Information Technology Act, 2000, states that "whoever publishes or transmits or causes to be published or transmitted in the electronic form, any material which is lascivious or appeals to the prurient interest or if its effect is such as to tend to deprave and corrupt persons who are likely, having regard to all relevant circumstances, to read, see or hear the matter contained or embodied in it, shall be punished..." — people, either unaware or unconcerned, continue to exploit these platforms to do just that.

Last year, the Kerala police busted a child pornography ring operating on Telegram. The group shared videos of child sexual abuse, including child rape, on the instant messaging app.

So how does Facebook's new policy help curb this?

The short and problematic answer is: it doesn't.

Unless explicitly reported, these transgressions continue to take place under the cover of encryption and privacy policies; WhatsApp's end-to-end encryption means the company itself cannot scan the contents of messages. In any case, even if Facebook does manage to incorporate the same image-hashing tech into WhatsApp, its ability to prevent the spread of "revenge porn" there would be limited to images and videos that had been flagged pre-emptively.

Facebook's present attempts, though admirable in spirit, offer little respite to those who have been filmed or photographed non-consensually.

It's akin to dousing fires gripping a burning house with little more than a glass of water — one that you have to produce on your own. 

Last updated: May 24, 2018 | 22:08