How a suicide video on Facebook Live went viral on TikTok

Both sites are ill-equipped to handle the spread of harmful content.


On August 31st, US Army veteran Ronnie McNutt took his own life while streaming live on Facebook. The stream has since been taken down, but not before the video was reposted and shared on other sites, ultimately going viral on TikTok earlier this week. Combine Facebook’s inability to stop the stream with TikTok’s struggle to contain viral content, and it’s clear that their efforts at stopping harmful content from spreading aren’t working.

The issue originated with Facebook, according to McNutt’s friend Josh Steen, who is currently leading a #ReformForRonnie campaign to hold social media companies accountable. Steen told Snopes that he had reported the livestream to Facebook while McNutt was still very much alive, but did not hear back from the company until it was too late. Even then, he received an automated response that stated the video did not violate community standards.

“Ronnie had been deceased for almost an hour and a half when I got the first notification from Facebook that they weren’t going to take down the video [...] what the hell kind of standards is that?” Steen told Snopes.

Earlier this week, Facebook issued the following statement: “We removed the original video from Facebook last month on the day it was streamed and have used automation technology to remove copies and uploads since that time.”


Later, on September 10th, the company informed Snopes that the video had been up on the site for two hours and 41 minutes before it was removed. “We are reviewing how we could have taken down the livestream faster,” it said in a statement. A response time of two hours and 41 minutes, Steen told Snopes, is far too slow and completely unacceptable, as friends and family were exposed to the video in the meantime.

During that time, the video was reposted in other Facebook groups and, according to Vice, spread to fringe forums like 4chan. Users of those sites then reshared the video on Facebook, as well as on other platforms like Twitter and YouTube. But it was on TikTok that the video truly went viral.

One of the potential reasons for this spread is TikTok’s algorithm, which is often credited with the app’s success. TikTok’s main feature is its For You page, a never-ending stream of videos tailored specifically to you, based on your interests and engagement. Because of this algorithm, complete unknowns can often go viral and make it big on TikTok, while they might have trouble doing so on other social networks.

In a blog post published this June, TikTok said that when a video is uploaded to the service, it is first shown to a small subset of users. Based on their response (watching the whole thing or sharing it, for instance), the video is then shown to more people who might have similar interests, and that feedback loop repeats, allowing a video to go viral. Other elements like song clips, hashtags and captions are also considered, which is why users often add the “#foryou” hashtag in hopes of landing on the For You page: if people engage with that hashtag, they could be recommended more videos with the same tag.
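To make the mechanics concrete, here is a minimal, hypothetical sketch of that kind of engagement-driven feedback loop. The names, thresholds and numbers are invented for illustration and are not TikTok’s actual system; the point is only to show how a small seed audience can snowball when engagement stays high.

```python
import random

# Hypothetical illustration of an engagement-driven feedback loop.
# None of these names or numbers come from TikTok; they only show how
# a small seed audience can snowball when engagement stays high.

SEED_AUDIENCE = 100         # initial small subset of users
EXPANSION_FACTOR = 5        # audience growth per successful round
ENGAGEMENT_THRESHOLD = 0.3  # share of viewers who must engage to expand

def simulate_spread(engagement_rate: float, rounds: int = 6) -> int:
    """Return total views after repeatedly expanding the audience."""
    audience = SEED_AUDIENCE
    total_views = 0
    for _ in range(rounds):
        total_views += audience
        # "Engagement" stands in for watch-throughs, shares and likes.
        engaged = sum(random.random() < engagement_rate for _ in range(audience))
        if engaged / audience < ENGAGEMENT_THRESHOLD:
            break  # weak response: the video stops being promoted
        audience *= EXPANSION_FACTOR  # strong response: widen the audience
    return total_views

print(simulate_spread(0.1))  # low engagement: the spread fizzles out
print(simulate_spread(0.6))  # high engagement: views grow geometrically
```

Under this toy model, a video that keeps clearing the engagement bar reaches a geometrically larger audience each round, which is why a disturbing clip that provokes strong reactions can spread so quickly.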


In other words, by using certain popular song clips, hashtags and captions, you could potentially “game” the TikTok algorithm and trick people into watching the video. Though TikTok hasn’t said that’s what happened in this case, it’s certainly a possibility. It’s also entirely possible that, as the story of the video got around, people simply searched for it to satisfy a morbid curiosity, which in turn prompted it to get picked up on the For You page again and again.

TikTok, for its part, has been working to block and remove the video since it started cropping up on Sunday. In a statement, it said:

Our systems, together with our moderation teams, have been detecting and removing these clips for violating our policies against content that displays, praises, glorifies, or promotes suicide. We are banning accounts that repeatedly try to upload clips, and we appreciate our community members who've reported content and warned others against watching, engaging, or sharing such videos on any platform out of respect for the person and their family. If anyone in our community is struggling with thoughts of suicide or concerned about someone who is, we encourage them to seek support, and we provide access to hotlines directly from our app and in our Safety Center.

But the company is having a difficult time. Users kept figuring out workarounds, like sharing the video in the comments or disguising it inside another video that initially seemed innocuous.

At the same time, however, TikTok has seen a surge of videos that aim to steer people away from the clip. Some users, as well as prominent creators, have taken to posting warning videos in which they say something like “if you see this image, don’t watch, keep scrolling.” Those videos have gone viral as well, which the company seems to support.

As for why people broadcast these acts in the first place, that’s unfortunately somewhat inevitable. “Everything that happens in real life is going to happen on video platforms,” said Bart Andrews, the Chief Clinical Officer of Behavioral Health Response, an organization that provides telephone counseling to people in mental health crises. “Sometimes, the act is not just the ending of life. It’s a communication, a final message to the world. And social media is a way to get your message to millions of people.”

“People have become so accustomed to living their lives online and through social media,” said Dan Reidenberg, the executive director of suicide non-profit organization SAVE (Suicide Awareness Voices of Education). “It’s a natural extension for someone that might be struggling to think that’s where they would put that out there.” Sometimes, he said, putting these thoughts on social media is actually a good thing, as it helps warn friends and family that something is wrong. “They put out a message of distress, and they get lots of support or resources to help them out.” Unfortunately, that’s not always the case, and sometimes the person goes through with the act regardless.

It therefore falls to the social media platforms to work out how best to prevent such acts, and to stop footage of them from spreading. Facebook is unfortunately well acquainted with the problem: several suicides, as well as murders, have been broadcast on its live streaming platform over the past few years.


Facebook has, however, taken steps to overcome this issue, and Reidenberg actually thinks it’s the leader in the technology world on this subject. (He was one of the people who led the development of suicide prevention best practices for the technology industry.) Facebook has published FAQs on suicide prevention, added a health and well-being expert to its safety policy team, provided a list of resources whenever someone searches for suicide or self-harm, and rolled out an AI-based suicide prevention tool that can supposedly detect comments likely to include thoughts of suicide.

Facebook has even integrated suicide prevention tools into Facebook Live, where users can reach out to the person and report the incident to the company at the same time. However, Facebook has said it wouldn’t cut off the livestream, because it could “remove the opportunity for that person to receive help.” Though that’s controversial, Andrews supports this notion. “I understand that if this person is still alive, maybe there’s hope, maybe there’s something that can happen in the moment that will prevent them from doing it.”

But unfortunately, as was the case with McNutt, there is also the risk of exposure and error. And the result can be traumatic. “There are some instances where technology hasn’t advanced fast enough to be able to necessarily stop every single bad thing from being shown,” Reidenberg said.

“Seeing these kinds of videos is very dangerous,” said Joel Dvoskin, a clinical psychologist at the University of Arizona College of Medicine. “One of the risk factors for suicide is if somebody in your family [died from] suicide. People you see on social media are like members of your family. If somebody is depressed or vulnerable or had given some thought to it, [seeing the video] makes it more salient as a possibility.”


As for that AI, both Reidenberg and Andrews say it just hasn’t done a great job of rooting out harmful content. Take, for example, Facebook’s failure to identify the video of the Christchurch mosque shooting because it was filmed in first person, or the more recent struggle to spot and remove COVID-19 misinformation. And no matter how good the AI gets, Andrews believes bad actors will always be one step ahead.

“Could we have a completely automated and artificial intelligence program identify issues and lock them down? I think we’ll get better at that, but I think there’ll always be ways to circumvent that and fool the algorithm,” Andrews said. “I just don’t think it’s possible, although it’s something to strive for.”

Instead of relying solely on AI, both Reidenberg and Andrews say that a combination of automated blocking and human moderation is key. “We have to rely on whatever AI is available to identify that there might be some risk,” Reidenberg said. “And actual people like content moderators and safety professionals at these companies need to try to intervene before something bad happens.”
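As a rough sketch of what that hybrid approach could look like in practice, consider the triage logic below. The thresholds, function names and routing labels are all invented for illustration; no platform has published this exact scheme. The idea is simply that an automated classifier handles the confident cases at both extremes, while the uncertain middle ground is escalated to human moderators.

```python
# Hypothetical sketch of the hybrid approach Reidenberg describes: an
# automated classifier triages content and humans handle the uncertain
# middle ground. All thresholds and names here are invented.

AUTO_REMOVE_THRESHOLD = 0.95   # near-certain violations: block immediately
HUMAN_REVIEW_THRESHOLD = 0.50  # possible violations: escalate to a person

def triage(risk_score: float) -> str:
    """Route a piece of content based on a model's risk estimate (0 to 1)."""
    if risk_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"   # the AI is confident: take it down at once
    if risk_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # ambiguous: a moderator makes the call
    return "allow"             # low risk: publish, but keep monitoring

for score in (0.99, 0.7, 0.2):
    print(score, "->", triage(score))
```

The appeal of this kind of design is that it spends scarce human attention only where the machine is unsure, while still removing the clearest violations at machine speed.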

As for newer social media companies, they too need to think proactively about suicide. “They have to ask how they want to be known as a platform in terms of social good,” Reidenberg said. In TikTok’s case, he hopes it will join forces with a company like Facebook, which has far more experience in this area. Even though the video was originally streamed on Facebook, it didn’t go viral there, because the company managed to lock it down (though it could still have been much more proactive and taken the video down far earlier than it did).


“Any new platform should start from the lessons from older platforms. What works, what doesn’t, and what kind of environment do we want to create for your users,” Andrews said. “You have an obligation to make sure that you are creating an environment and norms and have reporting mechanisms and algorithms to make sure that the environment is as true to what you wanted to be as you can make it. You have to encourage and empower users when they see things that are out of the norm, that they have a mechanism to report that and you have to find a way to respond very quickly to that.”

The answer might also lie in creating a community that takes care of itself. Andrews, for example, is especially heartened by the way the TikTok community rose up to warn fellow users about the video. “It’s this wonderful version of the internet’s own antibodies,” he said. “This is an example where we saw the worst of the internet, but we also saw the best of the internet. These are people who have no vested interest in doing this, warning others, but they went out of their way to protect other users from this traumatic imagery.”

That’s why, despite the tragedy and pain, Andrews believes that society will adapt. “For thousands of years, humans have developed behavior over time to figure out what is acceptable and what isn’t acceptable,” he said. “But we forget that technology, live streaming, this is all still so new. The technology sometimes has gotten ahead of our institutions and social norms. We’re still creating them, and I think it’s wonderful that we’re doing that.”

In the US, the National Suicide Prevention Lifeline can be reached at 1-800-273-8255. Crisis Text Line can be reached by texting HOME to 741741 (US), 686868 (Canada) or 85258 (UK).