Meta Oversight Board overturns decisions on removal of Israel-Gaza videos


People walk by a sign on the Meta campus in Menlo Park, California on October 28, 2022. Meta’s Oversight Board overruled Facebook’s automated system on two videos connected with the Israel-Hamas conflict on Tuesday. File Photo by Terry Schmitt/UPI

In its first expedited review, Meta’s Oversight Board ruled on Tuesday that the company’s automated tools unnecessarily removed two videos related to the Israel-Hamas war from its social media platforms.

The Oversight Board, an independent organization that reviews Meta’s content moderation decisions, overturned Meta’s initial decisions to remove the posts from its platforms but approved its later moves to return the posts to the platforms with a warning screen.

The board said the first case involved a video posted to Facebook showing an Israeli woman begging her kidnappers not to kill her as she was taken hostage during the terrorist raids on Israel on Oct. 7. The second video, posted to Instagram, showed what appears to be the aftermath of a strike on or near Al-Shifa Hospital in Gaza City during Israel’s ground offensive, in which Palestinians, including children, were killed or injured.

Meta said that because of an “exceptional surge” in violent and graphic content since Oct. 7, it temporarily lowered the confidence thresholds for the automated classification systems that identify content potentially violating its violence, hate speech, and bullying and harassment policies.

As a result, the board said, Meta used its automated tools “more aggressively” to remove content that may violate those policies.

“While this reduced the likelihood that Meta would fail to remove violating content that might otherwise evade detection or where capacity for human review was limited, it also increased the likelihood of Meta mistakenly removing non-violating content related to the conflict,” the board said.

The Oversight Board said the Al-Shifa case specifically showed that “insufficient human oversight of automated moderation during crisis response” could lead to posts that may be of “significant public interest” being incorrectly removed.

“Both the initial decision to remove this content as well as the rejection of the user’s appeal were taken automatically based on a classifier score, without any human review,” the board said, noting the response may have been “exacerbated” by Meta’s decision to lower the removal threshold following the Oct. 7 attacks.

Additionally, the board said that in both cases Meta demoted the content, excluding it from being recommended to other Facebook and Instagram users, “even though the company had determined that the posts intended to raise awareness.”

In the hostage video case, the board agreed that Meta’s “default approach” should be to protect the safety and dignity of hostages by removing such videos, but said the company was justified in its later decision to allow the content with a warning screen for the purposes of condemning the actions depicted, raising awareness, news reporting or calling for the hostages’ release.

“Indeed, given the fast-moving circumstances and the high costs to freedom of expression and access to information for removing this kind of content, Meta should have moved more quickly to adapt its policy,” the board said.

The board noted that on Oct. 20 Meta began allowing hostage-taking content to be shared by accounts on the company’s “cross-check lists,” and that the allowance was not extended to all accounts until Nov. 16, applying only to posts shared after that date.

It said, however, the practice highlighted concerns about the cross-check program, “including the unequal treatment of users, lack of transparent criteria for inclusion and the need to ensure greater representation of users whose content is likely to be important from a human-rights perspective on Meta’s cross-check lists.

“The use of the cross-check program in this way also contradicts how Meta has described and explained the purpose of the program, as a mistake prevention system and not a program that provides certain privileged users with more permissive rules,” the Oversight Board said.

The case was the first expedited review taken by the board, and was completed in 12 days, less than half of the 30-day limit for a decision required by the expedited process.
