Facebook fails again to detect hate speech in ads

SAN FRANCISCO (AP) — The test could hardly have been easier, and Facebook still failed.

Facebook and its parent company, Meta, once again failed a test of how well they can detect obviously violent hate speech in ads submitted to the platform by the nonprofit groups Global Witness and Foxglove.

The hateful messages focused on Ethiopia, where internal documents obtained by whistleblower Frances Haugen showed that Facebook’s ineffective moderation was “literally fanning ethnic violence,” as she said in her 2021 congressional testimony. In March, Global Witness ran a similar test with hate speech in Myanmar, which Facebook also failed to detect.

The group created 12 text-based ads that used dehumanizing hate speech to call for the killing of people belonging to each of Ethiopia’s three main ethnic groups: the Amhara, the Oromo, and the Tigrayans. Facebook’s systems approved the ads for publication, just as they had the Myanmar ads. The ads were not actually published on Facebook.

This time around, though, the group informed Meta about the undetected violations. The company said the ads should not have been approved and pointed to the work it has done to build “the ability to catch hateful and incendiary content in the most widely spoken languages, including Amharic.”

A week after hearing from Meta, Global Witness submitted two more ads for approval, again containing blatant hate speech. The two ads, written in Amharic, the most widely used language in Ethiopia, were approved.

Meta did not respond to multiple messages asking for comment this week.

“We picked out the worst cases we could think of,” said Rosie Sharpe, a campaigner at Global Witness. “The ones that ought to be the easiest for Facebook to detect. They weren’t coded language. They weren’t dog whistles. They were explicit statements saying that this type of person is not a human, or that these types of people should be starved to death.”

Meta has consistently refused to say how many content moderators it has in countries where English is not the primary language. That includes moderators in Ethiopia, Myanmar, and other regions where material posted on the company’s platforms has been linked to real-world violence.

In November, Meta said it had removed a post by Ethiopia’s prime minister urging citizens to rise up and “bury” the rival Tigray forces that threatened the country’s capital.

In the since-deleted post, Abiy said, “We all have a duty to die for Ethiopia,” and called on the public to mobilize “with weapons and abilities.”

Abiy has continued to post on the platform, however, where he has 4.1 million followers. The United States and others warned Ethiopia about “dehumanizing rhetoric” after the prime minister described the Tigray forces as “cancer” and “weeds” in comments made in July 2021.

“When ads calling for genocide in Ethiopia repeatedly get through Facebook’s net, even after the issue is flagged with Facebook, there’s only one possible conclusion: there’s nobody home,” said Rosa Curling, director of Foxglove, a London-based legal nonprofit that partnered with Global Witness on the investigation. “Years after the Myanmar genocide, it’s clear Facebook hasn’t learned its lesson.”
