The Facebook Anti-Suicide Algorithm

Following on from recent scandals surrounding the live-streaming of suicides on Facebook, the company last year launched an ambitious project to use artificial intelligence to detect when users may be at risk of suicide or other self-harm.

The algorithm analyses almost all posts made on Facebook and rates each one according to the probability that it signals a risk of imminent harm. Scores fall on a scale from 0 (no probability) to 1 (certainty).
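
Facebook has not published the model itself, but the scoring step can be pictured roughly as follows. This is a minimal, hypothetical sketch in Python: the Post type, the phrase weights, and the clamping are illustrative stand-ins for whatever trained classifier Facebook actually uses.

```python
# Hypothetical sketch of the scoring step. Facebook has not disclosed its
# model; the phrase weights below are placeholders for a trained classifier.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str


# Placeholder weights standing in for learned model features.
RISK_PHRASES = {
    "want to die": 0.75,
    "goodbye everyone": 0.25,
    "can't go on": 0.5,
}


def risk_score(post: Post) -> float:
    """Return a probability-like score on the 0-to-1 scale described above."""
    text = post.text.lower()
    score = sum(w for phrase, w in RISK_PHRASES.items() if phrase in text)
    return min(score, 1.0)  # clamp so the score never exceeds certainty (1)


if __name__ == "__main__":
    print(risk_score(Post("p1", "Goodbye everyone, I can't go on.")))  # 0.75
```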

Privacy Concerns

Given the number of privacy scandals the platform has recently faced, the fact that Facebook is generating and storing what amounts to mental-health data about its users, without their consent, is alarming a number of privacy experts.

Natasha Duarte, of the Center for Democracy and Technology, was quoted as saying “I think this should be considered sensitive health information. Anyone who is collecting this type of information or who is making these types of inferences about people should be considering it as sensitive health information and treating it really sensitively as such.”

Compounding the concern, Facebook has not been particularly transparent about the data it collects in this regard. Although the company has stated that low-risk scores are deleted after 30 days, it has not said how long, or in what form, higher-risk scores (and records of any subsequent interventions) are stored.
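
The one retention rule Facebook has disclosed could be expressed as follows. This is a hypothetical sketch: the low-risk threshold and the storage model are assumptions, and the policy for higher-risk scores is deliberately left unhandled because it has not been made public.

```python
# Hypothetical sketch of the disclosed retention rule: low-risk scores are
# deleted after 30 days. LOW_RISK_THRESHOLD is an assumption; Facebook has
# not published a cutoff, nor its retention policy for higher-risk scores.
from datetime import datetime, timedelta, timezone

LOW_RISK_THRESHOLD = 0.2            # assumed cutoff, not a published figure
LOW_RISK_RETENTION = timedelta(days=30)


def expired(score: float, recorded_at: datetime, now: datetime) -> bool:
    """True if a stored score falls under the disclosed 30-day deletion rule."""
    if score >= LOW_RISK_THRESHOLD:
        return False  # retention for higher-risk scores is undisclosed
    return now - recorded_at > LOW_RISK_RETENTION


if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    print(expired(0.05, now - timedelta(days=45), now))  # True: past 30 days
    print(expired(0.90, now - timedelta(days=45), now))  # False: policy unknown
```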

How Self-Harm Risks Are Handled

Once the algorithm identifies a post that may indicate a suicide risk, the post is sent to Facebook's content moderators. Facebook claims these moderators are trained to screen posts for suicide risk accurately, although its moderators have in the past been described as poorly trained overall.

If a content moderator judges a post to be high-risk, it is escalated to a response team whose members have backgrounds, according to Facebook, in fields such as law enforcement and crisis-hotline counselling. These team members have access to more information about the user whose post is being reviewed and can initiate various types of intervention.
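
The two-stage review described above can be sketched as a simple triage pipeline. All names and thresholds here are assumptions, and the moderator's human judgment is modelled as a score cutoff purely for illustration; Facebook has not published this workflow.

```python
# Hypothetical sketch of the two-stage pipeline: flagged posts reach a
# content moderator, and high-risk posts escalate to the response team.
# Thresholds are invented, and human review is simplified to a cutoff.
from enum import Enum, auto


class Outcome(Enum):
    NO_ACTION = auto()
    MODERATOR_REVIEW = auto()
    ESCALATED = auto()


FLAG_THRESHOLD = 0.5       # assumed score at which a post reaches a moderator
HIGH_RISK_THRESHOLD = 0.8  # assumed proxy for a moderator's high-risk judgment


def triage(score: float) -> Outcome:
    if score < FLAG_THRESHOLD:
        return Outcome.NO_ACTION
    if score < HIGH_RISK_THRESHOLD:
        return Outcome.MODERATOR_REVIEW
    # In the real pipeline, the response team would decide at this point
    # what intervention, if any, to initiate.
    return Outcome.ESCALATED


if __name__ == "__main__":
    for s in (0.1, 0.6, 0.95):
        print(s, triage(s).name)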

Data Protection Regulations

Although many experts in the field agree that the algorithm could be a valuable tool, concerns over the collection and use of personal information have prevented Facebook from deploying it on posts from users in the EU.

The EU's recently enacted General Data Protection Regulation (GDPR) requires that users give a website explicit permission before it may collect sensitive data. Because the program offers no way to opt in (or out), Facebook cannot run it in the region.
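
The constraint amounts to a consent gate in front of the scanner. The sketch below is a hypothetical illustration of that logic: the country list is abbreviated, and the consent parameter exists only to show that, with no opt-in mechanism, consent can never be present for EU users.

```python
# Hypothetical illustration of the GDPR constraint: sensitive-data
# processing requires explicit consent in the EU, and the program offers
# no consent mechanism, so EU posts can never be scanned.
from typing import Optional

EU_COUNTRIES = {"DE", "FR", "IE", "NL"}  # abbreviated list for the example


def may_scan(country_code: str, explicit_consent: Optional[bool]) -> bool:
    """Return True if a post may be scored under the rules described above."""
    if country_code in EU_COUNTRIES:
        # GDPR requires explicit consent, but the program provides no
        # opt-in, so consent is always absent (None) in practice.
        return explicit_consent is True
    return True  # outside the EU, posts are scanned without any opt-in


if __name__ == "__main__":
    print(may_scan("US", None))  # True: scanned by default
    print(may_scan("DE", None))  # False: no consent mechanism exists
```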

Balancing Risk

Facebook maintains that the intrusion on users' privacy is justified by the opportunity to help somebody at a critical moment. According to the company's statement, they “have decided to err on the side of providing people who need help with resources as soon as possible. And we understand this is a sensitive issue so we have a number of privacy protections in place.”