Facebook has long been vigilant about keeping your News Feed free of "inappropriate" content. That's relatively simple when you're talking about material that can be reviewed in full after it's posted — but what happens if something goes wrong during a livestream?
A new initiative is reportedly in the works to build up the social network's flagging system for offensive content in a particularly difficult area: Facebook Live.
Facebook has previously relied in part on a system that depends on users to report offensive material, which is then checked by Facebook employees against "community standards."
But at a recent roundtable at Facebook HQ in Menlo Park, Joaquin Candela, the company's director of applied machine learning, told reporters that they're testing artificial intelligence that can detect offensive content.
The new flagging protocol is “an algorithm that detects nudity, violence, or any of the things that are not according to our policies,” Candela said, according to Reuters.
Such an algorithm was tested back in June to screen videos posted in support of extremist groups — but going forward it will be applied to Facebook Live broadcasts to keep violent events and amateur erotica off the network.
According to Candela, the AI system is still being honed, and it will likely act as an alert, rather than a one-stop judge, jury and executioner of explicit streams.
“You need to prioritize things in the right way so that a human [who] looks at it, an expert who understands our policies, [would also take] it down,” he said.
As helpful as an AI-flagging system might be, there are still major questions about what should and shouldn't be considered "inappropriate." Facebook came under fire back in September after it removed a famous image from the Vietnam War — and that was under the old system, with a human moderator making the decision.
Yann LeCun, Facebook's director of AI research, declined to give Reuters a specific comment on their story but did address censorship in broader terms. He's aware of the tenuous position this type of system presents.
“These are questions that go way beyond whether we can develop AI,” he said. “Tradeoffs that I’m not well placed to determine.”
Those "tradeoffs" could have a real cost. Difficult, important broadcasts that might otherwise be flagged — like the streaming of violent encounters with law enforcement or the aftermath of a shooting — must be treated with careful consideration.
If a machine is at the controls without the benefit of human reasoning and context, important levels of nuance could be lost. The human element of the equation will reportedly still be in play to make the final decision, but people aren't perfect, either. In determining the guidelines for what's considered appropriate for Facebook Live broadcasts, the decision-makers need to keep these issues at the forefront of their policies.
Remember, this AI flagging system is only being tested for now — it's not yet in use on the Facebook you scroll through every day. Still, there's no doubt it — or something like it — is coming soon, once the company has determined that an AI can be trusted with our most sensitive content.