Facebook was criticized by British Prime Minister David Cameron for reversing its ban on posting beheading videos on the social network. It now says it is working on a method of warning people about violent content. Paul Sakuma/Associated Press
Facebook says it's working on new ways to keep users from stumbling across gruesome content on its website, following an outcry over the discovery of beheading videos on the site.
The controversy — which has drawn in British Prime Minister David Cameron — illustrates the difficulty of setting a universal standard across the one-billion-user social network.
Facebook banned beheading videos in May but recently lifted the prohibition.
The issue emerged in April after a video allegedly showing the beheading of a woman by a Mexican drug cartel was posted on Facebook. When some users complained about it, Facebook sent them a response saying it had reviewed the video but found it did not violate the company's community standards on graphic violence.
It stressed to users that people were sharing the video on the social network to condemn its content.
"Just as TV news programs often show upsetting images of atrocities, people can share upsetting videos on Facebook to raise awareness of actions or causes," Facebook said in a statement at the time. "While this video is shocking, our approach is designed to preserve people's rights to describe, depict and comment on the world in which we live."
It later changed its position after intense criticism from various interest groups advocating for a safer online environment for children. It decided to temporarily remove decapitation videos that were reported by users while it reviewed its policy.
This week, it emerged that it had decided to revert to its original policy and allow the sharing of violent content.
Cameron, whose right-leaning government has unveiled a range of initiatives to censor objectionable content online, said Tuesday that allowing the videos back on the site was "irresponsible."
Facebook said in a statement that it is working on ways to warn people about the content they might see.