How Facebook Handled A Fake Photo Of Mark Zuckerberg In A Nazi Uniform

Earlier this year, a simple Facebook search for Mark Zuckerberg’s name returned an unexpected result: an image of the Facebook founder in a Nazi uniform presented at the top of his photos, directly underneath his verified profile. The photoshopped picture left Facebook with two undesirable choices: it could either delete the inflammatory photo and risk accusations of censorship, or willingly host an image of its CEO in a Nazi outfit.

Neither option was particularly appealing for Facebook, but the situation was not unfamiliar. Though some of the company’s content moderation decisions have clear rationales — removing child pornography, for example — many of those it faces are similar to its fake Zuckerberg picture problem: dilemmas with no easy answers and potentially fraught consequences.

These tough choices, and the philosophy with which Facebook approaches them, are becoming increasingly important now that the company is in the midst of an unprecedented escalation of its content moderation efforts. The 2 billion–user platform is in the process of hiring 4,000 new moderators following intense public scrutiny over its bungled handling of violent content, fake news, and a Kremlin-backed effort to sow discord in the US during an election year. Facebook’s human moderators will now be bringing the company’s values to bear on more decisions about content that falls in gray areas, a fact often lost in the discussion of the need for the company to do better.

For an increasingly interventionist Facebook, now comes the hard part: figuring out how to wrangle the most difficult content problems on its vast platform — racism and hate speech, misinformation and propaganda, Mark Zuckerberg photoshopped into a Nazi uniform.

“This is truly a globally diverse population, and people across the world are going to have really different ideas about what is appropriate to share online and where we should draw those lines,” Facebook policy head Monika Bickert told BuzzFeed News.

Of the many complex issues Facebook faces, some of the thorniest emerged during the November hearings in Washington, when Facebook, Google, and Twitter were called into Congress to discuss Russia’s manipulation of their platforms during the 2016 presidential election. Amid intense grilling by lawmakers, the platforms’ lawyers repeatedly promised to do better.

But just how Facebook should deliver on that promise remains a major question. “There are going to be innumerable dilemmas that will not have easy answers,” Rep. Adam Schiff, the ranking member on the House Intelligence Committee, told BuzzFeed News following the hearings. “Even with an outfit like [Russian television network] RT, the questions are going to be difficult.”

RT has been called “the Kremlin’s principal international propaganda outlet” by the US intelligence community, making it a hot potato for Facebook, which was upbraided by Congress for enabling the Russians’ efforts to disrupt the 2016 election. Facebook does hold the power to limit the spread of RT content on its platform, but a decision to do so would be fraught. Allowing RT’s content to move through its network effectively turns Facebook into something of a propaganda delivery mechanism. But silencing RT would have consequences too, dealing a blow to free expression on a platform that hosts a great deal of public discussion.

Kyle Langvardt, an associate professor at University of Detroit Mercy Law who studies the First Amendment, warned of the danger of removing political content from a platform like Facebook. “If private companies are deciding what materials can be censored or not, and they’re controlling more and more of the public sphere, then we essentially have private companies regulating the public sphere in a way we would never accept from the state,” he said.

“The fact that these companies aren’t the government makes what they’re doing even more disturbing.”

As Langvardt indicated, the problems that might arise from Facebook regulating political speech are troubling indeed. The platform is used by 2.07 billion people each month, and its News Feed has become a de facto town square, a place where 45% of Americans say they get news, according to the Pew Research Center. There are few checks on Facebook’s power to remove content; it posts no public record of the content it removes, making it nearly impossible for third parties to hold it accountable for moderation decisions. “In a lot of ways, the fact that these companies aren’t the government makes what they’re doing even more disturbing,” Langvardt said. “If they were the government, at least they’d be accountable to the political process.”

Facebook does at least appear interested in more transparency measures. “We want to be more transparent about the ways we enforce against problematic content on Facebook, and we’re looking at ways to do that going forward,” a spokesperson said.

Currently, Facebook isn’t treating RT differently from other content on its platform, Bickert said. “Their relationship with their government is not a disqualifier to us,” she said. “If they were to publish something that violated our policies, we would remove it.” Facebook is also continuing to allow RT to advertise, unlike Twitter, which banned the publication from its ad platform after offering it 15% of its 2016 US elections ad inventory for $3 million.

Fake news — another topic discussed in those November hearings in DC — may be an even trickier area for Facebook, especially now that critics have alleged false information on the platform may have contributed to the murder of thousands of Myanmar’s Rohingya ethnic group. When news reports of the events surfaced, people accused Facebook of facilitating genocide; some called for the company to set up emergency teams to deal with the issue. But here too, a proper approach isn’t immediately apparent. Had Facebook deployed an emergency team to delete false content about Myanmar, it would have had to make blunt judgment calls on a conflict thousands of miles away from its headquarters. And indeed, when Facebook did finally take action and removed some posts documenting military activity in Rohingya villages, Rohingya activists accused it of erasing evidence of an ethnic cleansing.

Philosophically, Facebook doesn’t particularly want to remove false content. “I don’t think anyone wants a private company to be the arbiter of truth,” Bickert said. “People come to Facebook because they want to connect with one another. The content that they see is a function of their choices. We write guidelines to make sure we’re keeping people safe. We want our community to determine the content that they interact with.” Facebook has recently introduced a number of products to limit the spread of fake news, but the early results are inconclusive.

“I don’t think anyone wants a private company to be the arbiter of truth.”

Even with clear guidelines, moderation can be fraught. How does a platform like Facebook police harassment conducted via codeword? How does it handle racial slurs used in an educational or historical context? How does it handle a swastika in a satirical context? And though Facebook carries a clearly defined position against regulating political speech, that speech can still be subject to the moderation team should it come up against other Facebook rules, such as those prohibiting offensive speech.

Given the level of discretion involved, there’s a fine line between proactively removing misinformation, hate speech, and abusive content, and censorship. We’re already seeing signs of what a more aggressive platform looks like. Twitter, with rules similar to Facebook’s, is taking a more interventionist approach to speech following a long history of ignoring harassment. Its current campaign of de-verifications, account locks, and account suspensions is starting to have a noticeable secondary impact — leaving more than a few people on both sides of the aisle claiming to be unfairly silenced. “Twitter locked my account for 12 hours for calling out that racist girl,” one user wrote in October in a statement no longer surprising for the platform.

Facebook has used a lighter touch than Twitter, but its missteps over the past year have pushed it further under the microscope and relentlessly held it there. “Not just Facebook, but all mainstream social media platforms’ practices are under scrutiny in a way that they never have been before,” Sarah Roberts, an assistant professor at UCLA who’s been studying content moderation for seven years, told BuzzFeed News. “Facebook, whether justified or not, seems to bear the brunt of that scrutiny.”

And as Facebook tries to find the right content moderation balance, it also faces another challenge: keeping its thousands of moderators on the same page so they apply its rules consistently. David Wilner, an early Facebook employee who helped set up the company’s original content moderation effort, told BuzzFeed News this is the hardest part of the job. “For the most part, common sense isn’t a real thing. Sure, there is a set of fundamentals that most people will agree are good — helping a crying baby, for instance. But once you get beyond broad strokes about very, very basic flesh-and-blood questions, there’s no natural consensus,” Wilner said. “Everyone brings their history to each decision — their childhood, race, religion, nationality, political views, all of it. If you take 10 of your friends and separately give them 50 difficult examples of content to make decisions on — without any discussion — they will disagree a lot. We know this because we tried it.”

In the past, Facebook has shown a tendency to attack its problems with brute force, an approach that might prove disastrous were it applied to this new content moderation push, which is driven in part by intense public pressure. Facebook has already demonstrated the problems this approach can lead to: earlier this year it widely took down posts containing the word “dyke,” even when it was used in self-referential and not hateful ways. Upon receiving more context, Facebook restored many of the posts.

But if the way Facebook handled the Zuckerberg Nazi photoshop is an indication, the company appears to be taking a more nuanced approach to delicate content issues. Facebook did not delete the image. Instead, it left it up on the platform. The image no longer appears in the top results for searches of “Mark Zuckerberg,” and a spokesperson said it was pushed down in Zuckerberg’s search results in a sitewide update meant to improve relevance.

Alex Kantrowitz is a senior technology reporter for BuzzFeed News and is based in San Francisco. He reports on social and communications.

Contact Alex Kantrowitz at alex.kantrowitz@buzzfeed.com.

