Why Facebook Will Never Fully Solve Its Problems with AI

Mark Zuckerberg offered AI as a panacea for Facebook’s massive content problem during Tuesday’s testimony before the Senate Judiciary and Commerce committees — but it is ultimately a false promise.

Leaning on the promise of artificial intelligence to detect and remove the kind of problem content that is drawing scrutiny to the social network invariably leaves room for Facebook to never fully or directly take responsibility for what’s happening on its platform — and worse, it will do so at scale.

About one hour into his marathon testimony, Facebook’s CEO unexpectedly gave up the “neutral platform” defense that Facebook, and so many other technology companies, have deployed to distance themselves from being held accountable for the problems on their platforms.

“In the past, we’ve been told that platforms like Facebook, Twitter, Instagram, the like are neutral platforms. … They bore no responsibility for the content,” Sen. John Cornyn told Zuckerberg. “Do you agree now that Facebook and the other social media platforms are not neutral, but bear some responsibility for the content?”

“I agree that we’re responsible for the content,” Zuckerberg answered. It was an astonishing concession. But it didn’t last.

Seconds later, he launched into a talking point about how AI could address undesirable content, effectively abdicating Facebook’s responsibility for the problem. He would return to this defense 10 more times before his testimony ended.

“In the future, we’re going to have tools that are going to be able to identify more types of bad content” like hate speech, fake news, obscenity, revenge porn, and other controversial content on Facebook, Zuckerberg said. The company is hiring more content moderators, with the aim of having 20,000 workers by the end of this year, and “building AI tools is going to be the scalable way to identify and root out most of this harmful content.”

Call it AI solutionism. It’s an attractive idea. But it will never fully work.

“Proposing AI as the solution leaves a very long time period where the issue is not being addressed, during which Facebook’s answer to what is being done is, ‘We are working on it,’” Georgia Tech AI researcher Mark Riedl told BuzzFeed News.

Fake news running rampant? The algorithm hasn’t been trained on enough contextual data. Violence-inciting messages in Myanmar? The AI isn’t good enough, or maybe there aren’t enough Burmese-speaking content moderators — but don’t worry, the tools are being worked on. AI automation also gives the company deniability: If it makes a mistake, there’s no holding the software accountable.

“There is a tendency to want to see AI as a neutral moral authority,” Riedl told BuzzFeed News. “However, we also know that human biases can creep into data sets and algorithms. Algorithms can be wrong and there needs to be recourse.” Human biases can get coded into the AI, and uniformly applied across users of different backgrounds, in different countries with different cultures, and across wildly different contexts.
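To see how that can play out, consider a deliberately crude sketch (our own illustration, not any system Facebook has described) of a keyword filter that applies a single community’s norms to every context. The blocklist and example posts below are invented for demonstration:

```python
# Toy keyword moderator, invented for illustration; this is not any
# real system Facebook has described. It applies one rule to all contexts.
FLAGGED_TERMS = {"queer"}  # a slur in some contexts, reclaimed in-group speech in others

def naive_moderator(post: str) -> bool:
    """Flag a post if it contains any blocklisted term, ignoring context."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & FLAGGED_TERMS)

posts = [
    "Proud to speak at the queer film festival tonight!",  # benign, in-group
    "Those people are vermin and should be removed.",      # coded hate, no listed term
]

for post in posts:
    print(naive_moderator(post), post)
# The filter flags the benign post and misses the hateful one: the rule
# is "uniformly applied," and wrong in both directions.
```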

Facebook did not immediately respond to a request for comment from BuzzFeed News.

To be fair, even Zuckerberg was upfront about some of the limitations of AI, saying that while AI may be able to root out hate speech in 5 to 10 years, “today we are not there yet”:

“Some problems lend themselves more easily to AI solutions than others. Hate speech is one of the hardest, because determining if something is hate speech is very linguistically nuanced. You need to understand what is a slur, and whether something is hateful. Not just in English — most people on Facebook use it in different languages across the entire world. Contrast that, for example, with an area like finding terrorist propaganda, which we’ve been very successful at deploying AI tools on already.

“Today, as we sit here, 99% of the ISIS and Al Qaeda content we take down, AI flags before any human sees it. That’s a success in terms of rolling out AI tools that can proactively police and enforce safety across the community.”

But several AI researchers told BuzzFeed News this ignored several facets of the problem. First, as Cornell AI professor Bart Selman said, you could argue that artificial intelligence, and algorithms in general, seriously contributed to Facebook’s predicament in the first place.

“AI algorithms operate by finding clever ways to optimize for a pre-programmed objective,” Selman said. “Facebook instructs its news feed algorithms to optimize for ‘user engagement.’”

When Facebook users engaged with posts that reaffirmed their biases, Facebook showed them more of it. News feeds got increasingly polarized. Then bad actors realized they could game the system, and so fake news and extremist content became a problem.
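A stripped-down simulation makes that feedback loop concrete. This is a toy sketch under invented assumptions (three posts with made-up engagement probabilities), not Facebook’s actual ranking system, but it shows how a feed that optimizes only for engagement drifts toward whatever gets clicked most:

```python
import random

random.seed(0)

# Toy feed ranker that optimizes only for observed engagement.
# Posts and click probabilities are invented for illustration;
# this is not Facebook's news feed algorithm.
posts = {
    "measured policy analysis": 0.02,  # true chance a user engages
    "cute animal video": 0.10,
    "outrage-bait headline": 0.30,
}

clicks = {p: 0 for p in posts}  # engagement observed so far
shows = {p: 1 for p in posts}   # times shown (start at 1 to avoid /0)

for _ in range(5000):
    # Show whichever post has the best engagement estimate,
    # plus a little noise so every post gets some exposure.
    shown = max(posts, key=lambda p: clicks[p] / shows[p] + random.uniform(0, 0.05))
    shows[shown] += 1
    if random.random() < posts[shown]:
        clicks[shown] += 1

for p in sorted(posts, key=lambda p: shows[p], reverse=True):
    print(f"{p}: shown {shows[p]} times")
```

Run it and the outrage bait racks up the overwhelming majority of impressions. The objective was met exactly as programmed; nothing in it ever asked whether the content was true or healthy.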

Of course, Zuckerberg doesn’t want to talk about how AI got us into this mess.

As for Facebook’s systems catching what it considers “bad” content, Jana Eggers, the CEO of AI startup Nara Logics, said she “doubts” Facebook is rooting out as much of the terrorist content as Zuckerberg said it did. “There is plenty of that propaganda that is also being spread that they don’t find,” she told BuzzFeed News. “I worry that he has a false sense of pride on how much propaganda they are actually getting, and that false sense of pride will lead to its own set of problems.”

What’s more, the researchers warned that Zuckerberg’s timeline of AI understanding the human context in hate speech within 5 to 10 years could be unrealistic. “AI systems would have to develop fairly sophisticated forms of ethical reasoning and journalistic integrity to deal with such language,” said Cornell University’s Selman. “We are at least 20 to 30 years away from that for AI systems, and that may be an optimistic estimate.” But even Zuckerberg’s optimistic 10-year timeline would be “too long of a wait,” he said.

Tarleton Gillespie, who studies how algorithms and platforms shape public discourse at Microsoft Research, told BuzzFeed News that he wasn’t just skeptical that it would take “a while” for technology companies to develop AI adequate enough to address hate speech and controversial content on platforms. “AI likely can’t ever do what platforms want it to do,” he said.

At its size, Facebook is never going to fully address its vast content problem. Yes, having some AI systems to help those 20,000 content moderators is better than none. “But AI for content monitoring would need to be carefully designed and monitored with the right human interest-aligned objectives in mind,” Selman said.

Which implies a perpetual problem. Culture, the complexity of language, the tricks of those who willfully violate platform standards and game AI systems — these are all factors that the people developing AI systems themselves acknowledge are in flux. And that makes the training data itself fluid by definition, Microsoft Research’s Gillespie pointed out. Platforms will always need people to detect and assess new forms of hate and harassment, and they will never be able to eliminate the need for humans dealing with this problem.

What AI automation actually does, Gillespie argued, is “detach human judgment from the encounter with the specific user, interaction, or content and shift it to the analysis of predictive patterns and categories for what counts as a violation, what counts as harm, and what counts as an exception.” If Facebook truly wants to make a good-faith effort to grapple with its content problem, it shouldn’t outsource this judgment to general AI.

For as long as Facebook is as huge as it is, AI will never be a complete solution. One real — though unlikely — solution? Downsize. “Venture capitalists and the market may not have supported such an approach,” Selman said, “[but] if Facebook had opted for a more manageable size, the core problems would likely have been avoided.”

“It’s indeed the relentless pursuit of rapid growth that drove the need for near-complete AI automation, which caused the problems with these platforms.”
