Experts warn social media unprepared for 2024 elections amid language barriers

Experts highlight the critical need for ongoing content moderation across diverse languages and cultures to combat disinformation and hate, emphasizing that this effort should not be limited to election periods.


Experts are raising concerns about the readiness of major social media companies, including Google, TikTok, Meta, and X, to tackle misinformation during the 2024 elections taking place worldwide. Language barriers are cited as a significant challenge, with the Global Coalition for Tech Justice urging these tech giants to establish comprehensive election action plans to protect democracy and user safety. Experts warn that failure to do so could threaten the integrity of elections and the protection of citizens.

They emphasised the need for robust content moderation across various languages and cultural contexts, and called for increased investment in human moderation. Experts also stress that combating disinformation and hate should be an ongoing effort, not one limited to election periods, and should include measures to limit the spread of such content. Social media companies, including Meta, are also facing scrutiny over their handling of hate speech in countries such as India. These concerns have prompted calls for action and transparency from the tech industry.

Why does it matter?

Language barriers add a layer of complexity to content moderation, especially in diverse cultural settings. Significantly, social media firms have long favoured English content in their automated filtering, neglecting content moderation in thousands of other languages. This imbalance may have played a role in exacerbating severe instances of violence, including the Myanmar genocide and the 8 January 2023 attack on government buildings in Brazil. Addressing this issue is not only a matter of linguistic diversity but also a fundamental responsibility to safeguard the democratic process.