Musk’s platform under fire for inadequate fact-checking

Elon Musk’s social media platform, X, is facing criticism from the Center for Countering Digital Hate (CCDH), which claims its crowd-sourced fact-checking feature, Community Notes, is struggling to curb misinformation about the upcoming US election. According to a CCDH report, of 283 analysed posts containing misleading information, only 26% displayed corrective notes visible to all users, allowing false narratives to reach massive audiences. The 209 uncorrected posts gained over 2.2 billion views, raising concerns over the platform’s commitment to truth and transparency.

Community Notes was launched to empower users to flag inaccurate content. However, critics argue this system alone may be insufficient to handle misinformation during critical events like elections. Calls for X to strengthen its safety measures follow the platform’s recent legal defeat against the CCDH, whose research faulted X for an increase in hate speech. The report also highlights Musk’s endorsement of Republican candidate Donald Trump as a potential complicating factor, since Musk has himself been accused of spreading misinformation.

In response to the ongoing scrutiny, five US state officials urged Musk in August to address misinformation spread by X’s AI chatbot, which has reportedly circulated false claims about the November election. X has yet to respond to these calls for stricter safeguards, and its ability to manage misinformation effectively remains under close watch as the election approaches.

Missouri Attorney General accuses Google of censoring conservatives

Missouri’s Attorney General Andrew Bailey announced an investigation into Google on Thursday, accusing the tech giant of censoring conservative speech. In a statement shared on social media platform X, Bailey described Google as “the biggest search engine in America” and alleged that it has engaged in bias during what he called “the most consequential election in our nation’s history.” Bailey did not cite specific examples of censorship, prompting a swift dismissal from Google, which labelled the claims “totally false” and maintained its commitment to showing “useful information to everyone—no matter what their political beliefs are.”

Republicans have long contended that major social media platforms and search engines demonstrate an anti-conservative bias, though tech firms like Google have repeatedly denied these allegations. Concerns around this issue have intensified during the 2024 election campaign, especially as social media and online search are seen as significant factors influencing public opinion. Bailey’s investigation is part of a larger wave of Republican-led inquiries into potential online censorship, often focused on claims that conservative voices and views are suppressed.

Adding to these concerns, Donald Trump, the Republican presidential candidate, recently pledged that if he wins the upcoming election, he would push for the prosecution of Google, alleging that its search algorithm unfairly targets him by prioritising negative news stories. Trump has not offered evidence for these claims, and Google has previously stated its search results are generated based on relevance and quality to serve users impartially. As the November 5 election draws near, this investigation highlights the growing tension between Republican officials and major tech platforms, raising questions about how online content may shape future political campaigns.

Thousands of artists protest AI’s unlicensed use of their work

Thousands of creatives, including Kevin Bacon, Thom Yorke, and Julianne Moore, have signed a petition opposing the unlicensed use of their work to train AI. The 11,500 signatories believe that such practices threaten their livelihoods and call for better protection of creative content.

The petition argues that using creative works without permission for AI development is an ‘unjust threat’ to the people behind those works. Signatories from various industries, including musicians, writers, and actors, are voicing concerns over how their work is being used by AI companies.

British composer Ed Newton-Rex, who organised the petition, has spoken out against AI companies, accusing them of ‘dehumanising’ art by treating it as mere ‘training data’. He highlighted the growing concerns among creatives about how AI may undermine their rights and income.

The United Kingdom government is currently exploring new regulations to address the issue, including a potential ‘opt out’ model for AI data scraping, as lawmakers look for ways to protect creative content in the digital age.

Massachusetts parents sue school over AI use dispute

The parents of a Massachusetts high school senior are suing Hingham High School and its district after their son received a “D” grade and detention for using AI in a social studies project. Jennifer and Dale Harris, the plaintiffs, argue that their son was unfairly punished, as there was no rule in the school’s handbook prohibiting AI use at the time. They claim the grade has impacted his eligibility for the National Honor Society and his applications to top-tier universities like Stanford and MIT.

The lawsuit, filed in Plymouth County District Court, alleges the school’s actions could cause “irreparable harm” to the student’s academic future. Jennifer Harris stated that their son’s use of AI should not be considered cheating, arguing that AI-generated content belongs to the creator. The school, however, classified it as plagiarism. The family’s lawyer, Peter Farrell, contends that there is broad support for the view that using AI does not constitute plagiarism.

The Harrises are seeking to have their son’s grade changed and his academic record cleared. They emphasised that while they can’t reverse past punishments like detention, the school can still adjust his grade and confirm that he did not cheat. Hingham Public Schools has not commented on the ongoing litigation.

Meta’s oversight board seeks public input on immigration posts

Following two controversial cases on Facebook, Meta’s Oversight Board has opened a public consultation on immigration-related content that may harm immigrants. The board, which operates independently but is funded by Meta, will assess whether the company’s policies sufficiently protect refugees, migrants, immigrants, and asylum seekers from severe hate speech.

The first case concerns a Facebook post made in May by a Polish far-right coalition, which used a racially offensive term. Despite the post accumulating over 150,000 views, 400 shares, and receiving 15 hate speech reports from users, Meta chose to keep it up following a human review. The second case involves a June post from a German Facebook page that included an image expressing hostility toward immigrants. Meta also upheld its decision to leave this post online after review.

Following the Oversight Board’s intervention, Meta’s experts reviewed both cases again but upheld the initial decisions. Helle Thorning-Schmidt, co-chair of the board, stated that these cases are critical in determining if Meta’s policies are effective and sufficient in addressing harmful content on its platform.

Microsoft warns of rising cyber threats from nations

A recent Microsoft report claims that Russia, China, and Iran are increasingly collaborating with cybercriminals to conduct cyber espionage and hacking operations. This partnership blurs the lines between state-directed activities and the illicit financial pursuits typical of criminal networks. National security experts emphasise that this collaboration allows governments to amplify their cyber capabilities without incurring additional costs while offering criminals new profit avenues and the security of government protection.

The report, which analyses cyber threats from July 2023 to June 2024, highlights the significant increase in cyber incidents, with Microsoft reporting over 600 million attacks daily. Russia has focused its efforts primarily on Ukraine, attempting to infiltrate military and governmental systems while spreading disinformation to weaken international support. Meanwhile, as the US election approaches, both Russia and Iran are expected to intensify their cyber operations aimed at American voters.

Despite allegations, countries like China, Russia, and Iran have denied collaborating with cybercriminals. China’s embassy in Washington dismissed these claims as unfounded, asserting that the country actively opposes cyberattacks. Efforts to combat foreign disinformation are increasing, yet the fluid nature of the internet complicates these initiatives, as demonstrated by the rapid resurgence of websites previously seized by US authorities.

Overall, the evolving landscape of cyber threats underscores the growing interdependence between state actors and cybercriminals, posing significant risks to national security and public trust.

Reach criticised over fake AI-generated adverts of Alex Jones and Rachel Reeves

The publisher Reach has faced criticism for running disturbing adverts on its WalesOnline app, featuring fake AI-generated images of TV presenter Alex Jones and Chancellor Rachel Reeves. The images, which showed both figures with visible blood and bruises, directed users to fake BBC News articles promoting cryptocurrency.

Users of the app expressed outrage at the adverts, with Cardiff council’s cabinet member for culture, Jennifer Burke, describing them as ‘disturbing’. She questioned whether the publisher had a duty to vet the content advertised on their platform. Other users criticised the ads, labelling them ‘dystopian’.

The adverts appeared among genuine news articles on the app, which is part of Reach’s operation in Wales. Reach also publishes major United Kingdom news outlets, including the Mirror and the Express.

Both Alex Jones and Rachel Reeves have been contacted for comment, and Reach has been asked to address the situation.

UK police scale back presence on X over misinformation worries

British police forces are scaling back their presence on X, formerly known as Twitter, due to concerns over the platform’s role in spreading extremist content and misinformation. This decision comes after riots broke out in the UK this summer, fuelled by false online claims, with critics blaming Elon Musk’s approach to moderation for allowing hate speech and disinformation to flourish. Several forces, including North Wales Police, have stopped using the platform altogether, citing misalignment with their values.

Of the 33 police forces surveyed, 10 are actively reviewing their use of X, while others are assessing whether the platform is still suitable for reaching their communities. Emergency services have relied on X for more than a decade to share critical updates, but some, like Gwent Police, are reconsidering due to the platform’s tone and reach.

This shift is part of a larger trend in Britain, where some organisations, including charities and health services, have also moved away from X. As new online safety laws requiring tech companies to remove illegal content come into effect, digital platforms, including X, are facing growing scrutiny over their role in spreading harmful material.

Meta takes action against Russian-linked accounts in Moldova

Meta Platforms announced it had removed a network of accounts targeting Russian speakers in Moldova ahead of the country’s October 20 election, citing violations of its fake accounts policy. Moldovan authorities have also blocked numerous Telegram channels and chatbots allegedly used to pay voters to cast “no” votes in a referendum on EU membership being held alongside the presidential election. Pro-European President Maia Sandu, seeking a second term, has made the referendum central to her platform.

The deleted accounts targeted President Sandu, pro-EU politicians, and the close ties between Moldova and Romania, while promoting pro-Russia parties. The network featured fake Russian-language news brands masquerading as independent media across various platforms, including Facebook, Instagram, Telegram, OK.ru, and TikTok. Meta’s actions involved removing multiple accounts, pages, and groups to combat coordinated inauthentic behaviour.

Moldova’s National Investigation Inspectorate has blocked 15 Telegram channels and 95 chatbots that were offering payments to voters, citing violations of political financing laws. Authorities linked these activities to supporters of fugitive businessman Ilan Shor, who established the ‘Victory’ electoral bloc while in exile in Moscow. In response, Moldovan police have raided the homes of Shor’s associates, alleging that payments were funnelled through a Russian bank to influence the election. Shor, who was sentenced in absentia for his involvement in a major 2014 bank fraud case, denies the bribery allegations. Meanwhile, Sandu has accused Russia of attempting to destabilise her government, while Moscow claims that she is inciting ‘Russophobia.’

X returns to Brazil as court clears path for resumption

Social media giant X, formerly known as Twitter, became accessible to some Brazilian users on Wednesday, just one day after the country’s Supreme Court cleared the platform to resume operations following its compliance with court rulings. Brazil’s telecommunications regulator, Anatel, announced that it had begun instructing internet providers to restore access to X. Many users celebrated the return of the platform, with topics like ‘we’re back’ trending across Latin America’s largest country.

Despite the reopening, some Brazilians still encountered difficulties accessing X, as Anatel indicated that the restoration time would depend on the procedures of individual internet providers. Supreme Court Justice Alexandre de Moraes, who had been engaged in a lengthy dispute with billionaire Elon Musk, granted approval for X’s return on Tuesday afternoon. He instructed Anatel to ensure the platform was operational within 24 hours, affirming that X had fulfilled all necessary requirements to resume its services.

X had been suspended in Brazil since late August due to its failure to comply with court orders related to hate speech moderation and the absence of a designated legal representative in the country, as mandated by law. As the platform’s sixth-largest market worldwide, Brazil accounted for approximately 21.5 million users as of April, making the resumption of service a crucial step for X’s growth and presence in the region.