Turkey blocks Roblox amid child protection concerns

Türkiye has blocked access to the popular children’s gaming platform Roblox over concerns that content on the service could facilitate child abuse. Justice Minister Yilmaz Tunc announced the decision on X, citing a court ruling issued under the country’s law regulating internet broadcasting. He emphasised the state’s constitutional duty to protect children and condemned the harmful use of technology.

The ban on Roblox is the latest in a series of measures targeting internet platforms in Türkiye. Recently, Instagram faced similar restrictions after a senior aide to President Recep Tayyip Erdogan accused the social media platform of censoring posts related to the death of Hamas political leader Ismail Haniyeh.

Roblox has not yet responded to requests for comment on the ban. The company has struggled to keep inappropriate content off its platform, including cases involving paedophiles.

The development highlights the ongoing tension between the Turkish government and digital platforms as authorities enforce stringent measures to control online content and protect vulnerable users.

Elon Musk under fire as social media giant X implicated in fuelling UK riots

Elon Musk is under fire for his social media posts, which many believe have exacerbated the ongoing riots in Britain. Musk, known for his provocative online presence, has shared riot footage on his platform, X, and made controversial remarks, including predicting a ‘civil war’ and criticising Prime Minister Keir Starmer and the British government for prioritising speech policing over community safety.

The unrest began after a stabbing at a Taylor Swift-themed dance class in Southport, England, killed three young girls. False claims spread online suggesting the attacker was a Muslim immigrant who had entered the country illegally. In fact, the suspect, Axel Rudakubana, is a 17-year-old born in Cardiff, Wales; his religious affiliation is unknown, though his parents are from predominantly Christian Rwanda.

Despite the facts, anti-immigrant protests have erupted in at least 15 cities across Britain, producing the most significant civil disorder since 2011. Rioters have targeted mosques and hotels housing asylum seekers, with much of the violence directed at the police.

Prime Minister Starmer has criticised social media companies for allowing violent disinformation to spread. He specifically called out Musk for reinstating banned far-right figures, including activist Tommy Robinson. Technology Secretary Peter Kyle has met with representatives from major tech companies like TikTok, Meta, Google, and X to stress their duty to curb the spread of harmful misinformation.

Publicly, Musk has argued that the government should focus on its duties, mocking Starmer and questioning the UK’s approach to policing speech.

Home Secretary Yvette Cooper has stated that social media has amplified disinformation, promising government action against tech giants and online criminality. However, Britain’s Online Safety Act, which requires platforms to tackle illegal content, will not take full effect until next year. The EU’s Digital Services Act, by contrast, is already in force, but it no longer applies to Britain following its departure from the bloc.

AI writing tools in Apple’s iOS 18.1 come with content restrictions

Apple has introduced its new AI-powered Writing Tools in the iOS 18.1 developer beta, providing users with the ability to reformat or rewrite text using Apple’s AI models. However, the tool warns that AI-generated suggestions might not be of the highest quality when dealing with certain sensitive topics. Users will see a message alerting them when attempting to rewrite text containing swear words, references to drugs, or mentions of violence, indicating the tool wasn’t designed for such content.

Despite the warnings, the AI tool still offers suggestions even when it encounters restricted words or phrases. During testing, the tool still rewrote text containing a swear word, swapping it for a milder term. Apple has been asked to clarify which specific topics the writing tools are not trained to handle, but no further details have been provided yet.

Apple appears to be exercising caution to avoid controversy by limiting the AI’s handling of certain terms and topics. The Writing Tools feature is not intended to generate new content from scratch but rather to assist in rewriting existing text. Apple’s cautious approach aligns with its history, as seen when it finally allowed autocorrect to learn swear words in iOS 17 after years of restrictions.

The release of these AI features also coincides with Apple’s partnership with OpenAI on future AI capabilities and its support for the Biden administration’s AI safety initiatives. These steps underscore Apple’s stated commitment to responsible AI development while bringing more advanced tools to its users.

Zuckerberg apologises for Facebook photo error involving Trump

Former President Donald Trump revealed that Meta CEO Mark Zuckerberg apologised to him after Facebook mistakenly labelled a photo of Trump as misinformation. The photo, which showed Trump raising a fist after surviving an assassination attempt at a rally in Butler, Pennsylvania, was initially flagged by Meta’s AI system. Trump disclosed the apology during an interview with FOX Business’ Maria Bartiromo, stating that Zuckerberg called him twice to express regret and praise his response to the event.

Meta Vice President of Global Policy Joel Kaplan clarified that a fact-check label had been applied to a doctored version of the image and was then mistakenly extended to the genuine photo, which Meta’s systems judged to be nearly identical. Meta spokesperson Andy Stone reiterated that Zuckerberg has not endorsed any candidate in the 2024 presidential election and that the labelling error was not the result of bias.

The incident highlights ongoing challenges for Meta as it navigates content moderation and political neutrality, especially ahead of the 2024 United States election. Additionally, the assassination attempt on Trump has sparked various online conspiracy theories. Meta’s AI chatbot faced criticism for initially refusing to answer questions about the shooting, a decision attributed to the overwhelming influx of information during breaking news events. Google’s AI chatbot Gemini similarly refused to address the incident, sticking to its policy of avoiding responses on political figures and elections.

Both Meta and Google have faced scrutiny over their handling of politically sensitive content. Meta’s recent efforts to shift away from politics and focus on other areas, combined with Google’s cautious approach to AI responses, reflect the tech giants’ strategies to manage the complex dynamics of information dissemination and political neutrality in an increasingly charged environment.

Meta restores Malaysian PM’s posts after error

The company formerly known as Facebook apologised on Tuesday for removing content from Malaysian Prime Minister Anwar Ibrahim’s Facebook and Instagram accounts concerning the assassination of Hamas leader Ismail Haniyeh. The posts, which expressed condolences over Haniyeh’s death, were removed, prompting Malaysia to seek an explanation from Meta.

A Meta spokesperson explained that the removal was an operational error and confirmed that the content had been restored with a ‘newsworthy’ label. The apology followed a meeting on Monday between Malaysia’s communications minister, members of the Prime Minister’s Office, and Meta representatives.

The Prime Minister’s Office condemned Meta’s actions as discriminatory, unjust, and a blatant suppression of free expression. In a statement on Monday, the office expressed its dissatisfaction with the removal of the posts and demanded an explanation from the company. Meta’s acknowledgement of the mistake and the restoration of the content are intended to address the concerns raised by the Malaysian government.

This incident comes amid other challenges for Meta, including the exclusion of its AI models from the EU market due to regulatory concerns, and a significant fine imposed by Turkey for improper data sharing practices. These issues highlight the growing scrutiny Meta faces globally over content regulation, data privacy, and freedom of expression.

Elon Musk urged to address Grok’s election misinformation

Grok, an AI chatbot on X (formerly Twitter), has been accused of spreading false information about Vice President Kamala Harris’s eligibility for the 2024 presidential ballot. An open letter from five US secretaries of state, led by Minnesota’s Steve Simon, calls on Elon Musk, the owner of X, to address the issue urgently. The letter says Grok misled users by claiming that ballot deadlines had already passed in several states when they had not.

Although Grok includes a disclaimer urging users to verify facts, the false claims circulated widely before they were corrected, raising concerns about the accuracy of information on X.

The controversy highlights ongoing issues with X’s moderation policies. Under Musk, X has significantly reduced its moderation staff, which has affected its ability to manage misinformation effectively. Additionally, Musk has faced criticism for resharing misleading content and making provocative statements on social media.

The incident underscores X’s challenges in maintaining accurate information and the broader implications for online political discourse.

Bot accounts spread misinformation on X, fuel US conspiracy theories

An investigation by Global Witness revealed that bot-like accounts on the social media platform X spread misinformation and hate ahead of Britain’s election and are now targeting US politics. The accounts, active since late May, generated more than four billion impressions and have shifted their focus to events surrounding the presidential election in November.

The watchdog’s report highlighted how these accounts promoted conspiracy theories around the assassination attempt on Donald Trump and false claims about President Joe Biden withdrawing from the race. Despite Elon Musk’s pledges to reduce digital manipulation after purchasing the platform in 2022 for $44 billion, bot activity remains prevalent.

Accounts analysed by Global Witness also spread climate disinformation and amplified anti-migrant protests in Ireland. Ava Lee of Global Witness expressed concern at how easily these bots sow division and urged the platform to strengthen its moderation efforts to protect democratic processes.

The platform, previously known as Twitter, did not respond to requests for comment; its press team sent only an automated reply saying it was busy. Global Witness found no evidence linking British political parties to the bot-like accounts. Meanwhile, Elon Musk faced criticism for sharing a deepfake video of Kamala Harris, further raising concerns about the platform’s role in disseminating disinformation.

Google withdraws AI Olympics ad after backlash

After facing criticism for its portrayal of AI, Google has withdrawn its controversial ad from the Olympics. The ad featured a father using Google’s Gemini AI chatbot to help his daughter write a fan letter to Olympic athlete Sydney McLaughlin-Levrone, which many viewers felt undermined the child’s creativity by replacing it with AI-generated text.

Initially, Google defended the ad, asserting that it demonstrated how Gemini could provide a helpful starting point for writing. However, following widespread criticism, the company decided to pull the ad from rotation. The move highlights ongoing concerns about AI displacing creative work, echoing the backlash Apple faced earlier this year over a comparable ad.

The ad’s removal marks a notable misstep for Google, which aims to position Gemini as a key competitor to OpenAI’s ChatGPT and integrate AI across its products. The incident also underscores broader fears about AI’s impact on creative professions.

TikTok withdraws rewards program from EU to comply with DSA

ByteDance’s TikTok has agreed to permanently withdraw its TikTok Lite rewards program from the EU to comply with the Digital Services Act (DSA), according to the European Commission. The TikTok Lite rewards program allowed users to earn points by engaging in activities like watching videos and inviting friends.

In April, shortly after the app launched in France and Spain, the EU demanded a risk assessment from TikTok, citing concerns about its potential impact on children and users’ mental health. Under the DSA, large online platforms must assess and report the potential risks of new features to the EU before launching them and must adopt measures to address those risks.

TikTok has made legally binding commitments to withdraw the rewards program from the EU and not to launch any similar program that would bypass this decision. Breaching these commitments would violate the DSA and could lead to fines. Additionally, an investigation into whether TikTok breached online content rules aimed at protecting children and ensuring transparent advertising is ongoing, putting the platform at risk of further penalties.

Russian Foreign Ministry accuses YouTube of politically motivated censorship

Russian Foreign Ministry spokesperson Maria Zakharova has accused YouTube’s administration, which she says is controlled by Washington, of arbitrariness and political censorship. She claims that YouTube systematically censors information, beginning with the blocking of accounts belonging to Russian media outlets and government agencies, and says the ministry’s official channel has received unfounded warnings and had some of its videos blocked.

Zakharova maintains that YouTube’s actions amount to direct censorship, violating subscribers’ rights by restricting the free distribution of and access to information. She asserts that the United States, which she says oversees YouTube, is bound by international obligations to uphold freedom of speech and that YouTube’s actions contradict those obligations.

Separately, Alexander Khinshtein, head of the State Duma Committee on Information Policy, said YouTube loading speeds on computers could be cut by as much as 70%, while mobile connections would be unaffected. Roskomnadzor later cited disrespect for Russia and numerous legal violations as grounds for the measures against YouTube.