A former Meta engineer has accused the company of bias in its handling of Gaza-related content, alleging he was fired for addressing bugs that suppressed Palestinian Instagram posts. Ferras Hamad, a Palestinian-American who worked on Meta’s machine learning team, filed a lawsuit in California state court for discrimination and wrongful termination. Hamad claims Meta exhibited a pattern of bias against Palestinians, including deleting internal communications about the deaths of Palestinian relatives and investigating the use of the Palestinian flag emoji while not probing similar uses of the Israeli or Ukrainian flag emojis.
Hamad’s firing, he asserts, was linked to his efforts to fix issues that restricted Palestinian Instagram posts from appearing in searches and feeds, including a misclassified video by a Palestinian photojournalist.
Despite his manager confirming the task was part of his duties, Hamad was later investigated and fired, allegedly for violating a policy on working with accounts of people he knew personally, which he denies.
Meta’s annual shareholder meeting on Wednesday sparked online protests from human rights groups, calling for an end to what they describe as systemic censorship of pro-Palestinian content on the company’s platforms and within its workforce. Nearly 200 Meta employees have recently urged CEO Mark Zuckerberg to address alleged internal censorship and biases on public platforms, advocating for greater transparency and an immediate ceasefire in Gaza.
Activists argue that after years of pressing Meta and other platforms for fairer content moderation, shareholders might exert more influence on the company than public pressure alone. Nadim Nashif, founder of the social media watchdog group 7amleh, highlighted that despite a decade of advocacy, the situation has deteriorated, necessitating new strategies like shareholder engagement to spur change.
Earlier this month, a public statement from Meta employees followed a 2023 internal petition with over 450 signatures, whose author was investigated by HR for allegedly violating company rules. The latest letter condemns Meta’s actions as creating a ‘hostile and unsafe work environment’ for Palestinian, Arab, Muslim, and ‘anti-genocide’ colleagues, with many employees claiming censorship and dismissiveness from leadership.
During the shareholder meeting, Meta focused on its AI projects and managing disinformation, sidestepping the issue of Palestinian content moderation. Despite external audit findings and a letter from US Senator Elizabeth Warren criticising Meta’s handling of pro-Palestinian content, the company did not immediately address the circulating letters and petitions.
The EU enforcers responsible for overseeing the Digital Services Act (DSA) are intensifying their scrutiny of disinformation campaigns on X, formerly known as Twitter and owned by Elon Musk, in the aftermath of the recent shooting of Slovakia’s prime minister, Robert Fico. X has been under formal investigation since December over the dissemination of disinformation and the efficacy of its content moderation tools, particularly its ‘Community Notes’ feature. Despite the ongoing investigation, no penalties have been imposed thus far.
Elon Musk personally amplified a post by right-wing influencer Ian Miles Cheong linking the shooting to Fico’s purported rejection of the World Health Organization’s pandemic prevention plan, drawing further attention to X’s role in spreading potentially harmful narratives. In response to inquiries during a press briefing, EU officials confirmed they are closely monitoring content on the platform to assess the effectiveness of X’s measures against disinformation.
In addition to disinformation concerns, X’s introduction of its generative AI chatbot, Grok, in the EU has raised regulatory eyebrows. Grok, known for its politically incorrect responses, has had parts of its rollout delayed until after the upcoming European Parliament elections due to perceived risks to civic discourse and election integrity. The EU is in close communication with X regarding the rollout, underscoring the regulatory scrutiny surrounding emerging AI technologies and their potential impact on online discourse and democratic processes.
Every year, the Met Gala courts controversy, but in 2024, a TikTok audio track ignited outrage before the event even began. Influencer Haley Kalil faced backlash for using a snippet from the film ‘Marie Antoinette,’ featuring the infamous line, ‘Let them eat cake,’ as she showcased her lavish attire. Critics likened the spectacle to ‘The Hunger Games,’ criticising the disconnect between opulence and global suffering, particularly in light of ongoing conflicts like the Gaza crisis.
Social media platforms have become battlegrounds for shaping public discourse, especially concerning the Israel-Palestine conflict. With audiences primarily exposed to the issue through digital channels, platforms like Instagram and TikTok have become outlets for frustration and activism. Simultaneously, a grassroots movement known as ‘Blockout 2024’ emerged, urging users to block celebrities to diminish their influence and redirect attention to pressing global issues.
The Blockout movement has gained momentum, with thousands participating in calls to action on social media. Alongside blocking celebrities, efforts to provide direct aid to Gaza have intensified. Influencers are facing pressure to promote fundraising initiatives like Operation Olive Branch, highlighting the role of social media in mobilising support for humanitarian causes amidst geopolitical conflicts.
While the impact of social media activism remains uncertain, the Blockout movement appears to signal a shift towards digital solidarity and accountability. As the conflict unfolds through short-form videos and Instagram posts, the conversation sparked by events like the Met Gala indicates a growing intersection between celebrity culture, social media activism, and global crises.
Malaysia’s communications minister has criticised Meta Platforms for removing Facebook posts by local media covering Prime Minister Anwar Ibrahim’s meeting with a Hamas leader in Qatar. Anwar clarified that while he has diplomatic relations with Hamas’s political leadership, he is not involved in its military activities.
Expressing Malaysia’s support for the Palestinian cause, the government has asked Meta to explain the removal of posts by two media outlets about Anwar’s meeting. Additionally, a Facebook account covering Palestinian issues was closed.
Communications Minister Fahmi Fadzil condemned Meta’s actions, noting the posts’ relevance to the prime minister’s official visit to Qatar. He emphasised concerns about Meta’s disregard for media freedom.
Last October, Fahmi warned of potential actions against Meta and other social media platforms if they obstructed pro-Palestinian content, noting that Malaysia consistently advocates for a two-state solution to the Israel-Palestine conflict.
Alphabet’s YouTube announced its compliance with a court decision to block access to 32 video links in Hong Kong, a move critics argue infringes on the city’s freedoms amid tightening security measures. The decision followed a government application granted by Hong Kong’s Court of Appeal, targeting a protest anthem named ‘Glory to Hong Kong,’ with judges cautioning against its potential use by dissidents to incite secession.
Expressing disappointment, YouTube stated it would abide by the removal order while highlighting concerns regarding the chilling effect on online free expression. Observers, including the US government, voiced worries over the ban’s impact on Hong Kong’s reputation as a financial hub committed to the free flow of information.
Industry groups emphasised the importance of maintaining a free and open internet in Hong Kong, citing its significance in preserving the city’s competitive edge. The move reflects broader trends of tech companies complying with legal requirements, with Google parent Alphabet having previously restricted content in China.
Why does it matter?
Despite YouTube’s action, tensions persist over the erosion of freedoms in Hong Kong, underscored by ongoing international scrutiny and criticism of the city’s security crackdown on dissent. As the city grapples with balancing national security concerns and its promised autonomy under the ‘one country, two systems’ framework, the implications for its future as a global business centre remain uncertain.
A group of TikTok creators has taken legal action against the US federal government over a law signed by President Joe Biden. The law would either require the divestiture of the popular short video app or potentially ban it altogether. TikTok creators argue that the app has become integral to American life, with 170 million users nationwide.
Among those suing are individuals from diverse backgrounds and professions, including a Marine Corps veteran, a woman selling cookies, a college coach, a hip-hop artist, and an advocate for sexual assault survivors. Despite their differences, they all believe TikTok provides a unique platform for self-expression and community-building.
The lawsuit, filed by Davis Wright Tremaine LLP on behalf of the creators, alleges that the law infringes on free speech rights and threatens to eliminate an important communication medium. The White House has refrained from commenting on the matter, while the US Department of Justice asserts that the law addresses national security concerns while remaining within constitutional boundaries.
Why does it matter?
The ongoing legal battle echoes past disputes involving TikTok, including a similar lawsuit filed by the company and its parent company, ByteDance. Courts have previously intervened to block attempts to ban the app, citing concerns about free speech and constitutional rights.
The Delhi High Court has directed Google and Microsoft to file a review petition seeking the recall of a previous order that mandated search engines to promptly restrict access to non-consensual intimate images (NCII) without requiring victims to repeatedly provide specific URLs. Both tech giants argued that proactively identifying and taking down NCII images is technologically infeasible, even with the assistance of AI tools.
The court’s order stems from a 2023 ruling requiring search engines to remove NCII within 24 hours, as per the IT Rules, 2021, or risk losing their safe harbour protections under Section 79 of the IT Act, 2000. That ruling proposed issuing a unique token upon the initial takedown, making search engines responsible for taking down any resurfaced content using pre-existing technology, thereby sparing victims the burden of tracking and repeatedly reporting specific URLs. The court also suggested leveraging hash-matching technology and developing a ‘trusted third-party encrypted platform’ on which victims could register NCII content or URLs, shifting the responsibility for identifying and removing resurfaced content from victims onto the platforms while maintaining high standards of transparency and accountability.
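The token-and-hash mechanism the court describes can be illustrated with a minimal sketch. The class and method names below are hypothetical, and the example uses exact SHA-256 matching purely for simplicity; production systems typically rely on perceptual hashing (such as PhotoDNA) so that re-encoded or slightly altered copies still match.

```python
import hashlib
import secrets

class TakedownRegistry:
    """Hypothetical sketch of the registry proposed by the court:
    a takedown issues a unique token, and the hash of the removed
    content is stored so resurfaced copies can be matched without
    the victim having to re-report each URL."""

    def __init__(self):
        self._hashes = {}  # content hash -> takedown token

    def register_takedown(self, content: bytes) -> str:
        """Record removed content and return a unique token for the victim."""
        token = secrets.token_hex(8)
        digest = hashlib.sha256(content).hexdigest()
        self._hashes[digest] = token
        return token

    def matches_known_takedown(self, content: bytes):
        """Return the original token if this content was previously
        removed, or None if it is unknown to the registry."""
        return self._hashes.get(hashlib.sha256(content).hexdigest())

registry = TakedownRegistry()
token = registry.register_takedown(b"<removed image bytes>")
assert registry.matches_known_takedown(b"<removed image bytes>") == token
assert registry.matches_known_takedown(b"<unrelated image>") is None
```

The exact-hash lookup is what makes the scheme cheap for platforms, but it is also the source of Google’s objection below: a hash match says nothing about whether the content was shared consensually.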
However, Google expressed concerns regarding automated tools’ inability to discern consent in shared sexual content, potentially leading to unintended takedowns and infringing on free speech, echoing Microsoft’s apprehension about the implications of proactive monitoring on privacy and freedom of expression.
An Australian court has denied the cyber safety regulator’s attempt to extend an order for Elon Musk’s X to block videos depicting the stabbing of an Assyrian church bishop, labelled as a terrorist attack. The Federal Court judge, Geoffrey Kennett, rejected the bid to prolong the injunction, with reasons for the decision to be disclosed later.
The legal clash has fueled tensions between Musk and senior figures in Australia, including Prime Minister Anthony Albanese, who criticised Musk as ‘an arrogant billionaire’ for resisting the video’s takedown. Musk responded by posting memes, condemning the regulatory order as censorship. While other platforms like Meta swiftly removed the content upon request, X has been persistent in its refusal to remove the posts globally, arguing against one country’s rules dictating internet content.
Last month, the Federal Court upheld the eSafety Commissioner’s order for X to remove 65 posts containing the violent footage of the bishop’s stabbing during a sermon in Sydney. The incident, for which a 16-year-old boy has been charged with a terrorism offence, prompted Australia to block local access to the posts. However, the regulator contested X’s proposal to geo-block Australians, claiming it was ineffective due to the widespread use of virtual private networks to conceal users’ locations.
In response to the rising concerns over social media influence, Albanese’s government has announced plans for a parliamentary inquiry to investigate the adverse effects of online platforms. The inquiry aims to address the control social media exerts over Australians’ online content consumption, highlighting a lack of oversight.
Australia is taking stringent measures by announcing a parliamentary inquiry into the impact of social media platforms. The move responds to growing concerns over their influence on public discourse and the alarming spread of harmful content. Prime Minister Anthony Albanese, in his address, underscored the need for greater scrutiny, acknowledging that while social media can be a force for good, it can also exert a harmful influence, particularly on issues as grave as domestic violence and radicalisation.
The government’s move comes amid criticism of platforms like Meta’s Facebook, ByteDance’s TikTok, and Elon Musk’s X over their handling of violent posts and content moderation. X, in particular, is embroiled in a legal dispute with the Australian government over its refusal to globally remove videos of a recent stabbing attack on an Assyrian church bishop in Sydney. The government argues for broader content removal, while Musk has characterised the decision as censorship.
The inquiry will also examine Meta’s decision to stop paying for news content in Australia, reflecting broader concerns about the role of social media in shaping public discourse and its impact on traditional media. Communications Minister Michelle Rowland stressed the importance of understanding how social media companies regulate content and called for greater accountability in their decision-making processes.
As Parliament gears up for the inquiry, its terms and scope are still being determined. The aim is to scrutinise the practices of social media companies and make recommendations for accountability measures. The inquiry may involve summoning individuals to testify, a move that underscores the government’s commitment to addressing concerns surrounding social media regulation and content moderation. Its outcomes will be crucial in shaping the future of social media regulation in Australia.