In light of the escalating conflict between Israel and Hamas, X, formerly known as Twitter, is taking steps to stem a torrent of posts containing graphic imagery, violent speech, and hateful conduct. The company has pledged its highest level of response to the crisis, amid claims of rampant misinformation from watchdog groups and the European Union's digital policy chief. The social media platform was acquired last year by billionaire Elon Musk.
Manipulated imagery and outright forgeries have found fertile ground on X. As European Commissioner Thierry Breton noted in a recent letter to Musk, a growing number of images apparently repurposed from unrelated military conflicts or video games are being falsely attributed to the ongoing fighting.
Breton also underscored the presence of "potentially illegal content" detected by authorities that could infringe EU law, urging Musk to be "timely, diligent and objective" in addressing the issue. While X has yet to formally respond to Breton's letter, a recent post from its safety team acknowledged a surge in posts about the conflict across the platform.
Representatives from X also confirmed the company's continued commitment to a policy championed by Musk, which relies on users to rate and add context to potentially misleading posts rather than removing them from the platform.
Into this melee swims Musk, who recently recommended two accounts as reliable sources for live updates. His choices, however, have been criticized by prominent voices in the field, including Atlantic Council analyst Emerson Brooking, as enabling disinformation. Notably, both accounts had previously been implicated in spreading a fabricated, AI-generated image of an explosion at the Pentagon.
Brooking held Musk personally responsible for making it harder to separate fact from fiction, citing Musk's dismantling of the blue check verification system and the introduction of a structure that, ironically, rewards the spread of unverified, sensationalist information.
"Musk has animated a tidal wave of tragic disinformation," Brooking commented. He also contended that Musk's attempts to dismiss the role of objective media have fundamentally undercut the efforts of empirical reporting.
Musk's sweeping changes since acquiring Twitter have included deep staff cuts, primarily among those tasked with moderating potentially harmful content. Experts warn that the resulting reduction in manpower significantly hampers X's ability to moderate content effectively.
In a recent step toward greater transparency and user control, X has revised a policy governing sensitive media. Rather than removing such posts, the platform now allows users to filter them, a decision that, according to X, serves the public interest in understanding events as they unfold in real time.
X is also dedicating resources to removing newly created accounts linked to Hamas and is partnering with other tech companies to curb the online distribution of "terrorist content." Hundreds of accounts attempting to manipulate trending topics have also been removed.
Linda Yaccarino, the executive Musk appointed to lead X, has made the platform's safety a priority, withdrawing from a tech conference to focus solely on managing the platform amid the ongoing conflict.