Turkey blocks access to Instagram

Turkey has blocked access to the social media platform Instagram, according to an announcement by the country’s ICT regulator. Neither the reason for nor the duration of the ban has been disclosed; the block has also rendered the platform’s mobile app inaccessible.

The decision follows remarks by Turkey’s presidential communications director, Fahrettin Altun, who criticised Instagram for allegedly blocking condolence posts regarding the killing of Ismail Haniyeh, a prominent figure in the Palestinian militant group Hamas. Altun labelled Instagram’s action as ‘censorship’ and pointed out that the platform had not cited any policy violation as justification.

Meta Platforms Inc., the parent company of Instagram, has not yet responded to the ban or to Altun’s accusations. The Turkish Information Technologies and Communication Authority (BTK) made the decision public on its website on 2 August.

Google enhances protections against fake explicit content

Google is taking significant steps to address the problem of non-consensual sexually explicit fake content, often referred to as ‘deepfakes’, which is increasingly being distributed online. Recognising the distress this can cause, Google has updated its policies and systems to help affected individuals more effectively. These updates include easier removal processes for such content from Search and improvements to Google’s ranking systems to prevent this harmful material from appearing prominently in search results.

People have long been able to request the removal of non-consensual explicit imagery from Google Search, but the new changes make the process more accessible. Once a request is granted, Google’s systems will also aim to filter out all explicit results on similar searches. Additionally, if an image is removed under these policies, Google will scan for and remove any duplicates, providing greater peace of mind for those worried about the same content resurfacing.
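Duplicate detection of this kind is commonly built on perceptual hashing, which gives visually similar images nearly identical fingerprints. The sketch below is purely illustrative and not Google’s actual system; it assumes the third-party Pillow and imagehash libraries, and the file names are invented for the example.

```python
from PIL import Image
import imagehash

# Perceptual hash of the image that was removed after a granted request.
removed_hash = imagehash.phash(Image.open("removed_image.jpg"))

def is_likely_duplicate(candidate_path: str, threshold: int = 8) -> bool:
    """Flag a candidate image as a probable duplicate of the removed one.

    pHash is robust to resizing and re-encoding, so near-identical copies
    fall within a small Hamming distance of the original's hash.
    """
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    return (removed_hash - candidate_hash) <= threshold  # Hamming distance

# Scan newly indexed images and queue probable duplicates for removal.
for path in ["copy_720p.jpg", "unrelated_photo.jpg"]:
    if is_likely_duplicate(path):
        print(f"{path}: probable duplicate, queue for removal")
```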

In tandem with these removal process enhancements, Google is also refining its ranking systems to demote explicit fake content. That includes lowering the ranking of such content for searches that may inadvertently lead to it and promoting high-quality, non-explicit content instead. These changes have already shown promising results, reducing exposure to explicit image results on certain queries by over 70%. By distinguishing between real and fake explicit content, Google aims to better surface legitimate content while minimising exposure to harmful material.
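One simple way to picture this kind of demotion is a re-scoring step that scales down a result’s relevance score when a classifier flags it as likely explicit fake content. The snippet below is a hypothetical illustration, not Google’s ranking system; the classifier probabilities, penalty factor, and URLs are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    relevance: float           # base relevance score from retrieval
    explicit_fake_prob: float  # hypothetical classifier output in [0, 1]

def rerank(results: list[Result], penalty: float = 0.1,
           threshold: float = 0.8) -> list[Result]:
    """Demote results a classifier flags as likely explicit fakes.

    Flagged results keep only a fraction of their relevance score,
    so high-quality, non-explicit pages rise above them.
    """
    def score(r: Result) -> float:
        if r.explicit_fake_prob >= threshold:
            return r.relevance * penalty
        return r.relevance
    return sorted(results, key=score, reverse=True)

results = [
    Result("https://example.com/fake-clip", 0.9, 0.95),
    Result("https://example.com/news-article", 0.7, 0.02),
]
for r in rerank(results):
    print(r.url)  # the news article now outranks the flagged fake
```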

Google acknowledges that more work is needed to tackle this issue comprehensively. The company is committed to ongoing improvements and industry-wide partnerships to address the broader societal challenges of non-consensual explicit fake content. These efforts reflect Google’s dedication to protecting individuals and maintaining the integrity of its search results.

Russia fines Google and TikTok over banned content

Russia’s communications regulator, Roskomnadzor, has fined Alphabet’s Google and TikTok for failing to comply with orders to remove banned content. The Tagansky district court in Moscow imposed a 5 million rouble ($58,038) fine on Google and a 4 million rouble fine on TikTok. The penalties were issued because both platforms failed to identify content similar to material they had previously been ordered to remove.

This is part of a broader effort by Russia over the past several years to enforce the removal of content it considers illegal from foreign technology platforms. Although relatively small, the fines have been persistent, reflecting Russia’s ongoing scrutiny and regulation of online content.

Moscow has been particularly critical of Google, especially for taking down YouTube channels associated with Russian media and public figures. Neither Google nor TikTok immediately responded to requests for comment on the fines.

Malaysia to license social media platforms

Malaysia is introducing a new regulation requiring social media services with over 8 million users nationwide to obtain a license starting 1 August. The new requirement aims to tackle rising cyber offences, including scams and cyberbullying, by ensuring compliance with Malaysian laws.

The Malaysian Communications and Multimedia Commission (MCMC) announced that platforms failing to apply for a license by 1 January 2025 will face legal action. The introduction of this new condition follows directives from the Communications Minister urging social media companies to address government concerns about harmful content.

Why does this matter?

The decision comes amid a rise in harmful social media activity in Malaysia. The government has called on platforms like Meta and TikTok to enhance their content monitoring efforts. Currently, the communications regulator can only flag illegal content, but the final decision to remove it rests with the social media companies.

YouTube faces speed drops in Russia amid tensions

YouTube speeds in Russia are expected to decline significantly on desktop computers, a slowdown Russian officials attribute to Google’s failure to upgrade its equipment in the country and its refusal to unblock Russian media channels. Alexander Khinshtein, head of the lower house of parliament’s information policy committee, emphasised that the slowdown is a repercussion of YouTube’s own actions, highlighting that download speeds on the platform have already decreased by 40% and could drop by up to 70% next week.

The decline in YouTube quality is attributed to Google’s inaction, particularly its failure to upgrade Google Global Cache servers in Russia. Additionally, Google has not invested in Russian infrastructure and allowed its local subsidiary to go bankrupt, preventing it from covering local data centre expenses. Communications regulator Roskomnadzor has echoed these concerns, indicating that the lack of upgrades has led to deteriorating service quality.

Google has faced multiple fines from Russia for not removing content deemed illegal or undesirable by the Russian government. Following Russia’s invasion of Ukraine, YouTube in March 2022 blocked channels associated with Russian state-funded media worldwide, citing its policy against content that denies or trivialises well-documented violent events. Subsequently, Google’s Russian subsidiary filed for bankruptcy, citing Russian authorities’ seizure of its bank account as the reason it could no longer operate. Meanwhile, some Russian officials, including Chechen leader Ramzan Kadyrov, have proposed blocking YouTube entirely in response to the ongoing tensions.

Meta oversight board calls for clearer rules on AI-generated pornography

Meta’s Oversight Board has criticised the company’s rules on sexually explicit AI-generated depictions of real people, stating they are ‘not sufficiently clear.’ That follows the board’s review of two pornographic deepfakes of famous women posted on Meta’s Facebook and Instagram platforms. The board found that both images violated Meta’s policy against ‘derogatory sexualised photoshop,’ which is considered bullying and harassment and should have been promptly removed.

In one case involving an Indian public figure, Meta failed to act on a user report within 48 hours, leading to an automatic ticket closure. The image was only removed after the board intervened. In contrast, Meta’s systems automatically took down the image of an American celebrity. The board recommended that Meta clarify its rules to cover a broader range of editing techniques, including generative AI. It criticised the company for not adding the Indian woman’s image to a database for automatic removals.

Meta has stated it will review the board’s recommendations and update its policies accordingly. The board emphasised the importance of removing harmful content to protect those impacted, noting that many victims of deepfake intimate images are not public figures and struggle to manage the spread of non-consensual depictions.

US Senate passes bill to combat AI deepfakes

The US Senate has unanimously passed the DEFIANCE Act, allowing victims of nonconsensual intimate images created by AI, known as deepfakes, to sue their creators for damages. The bill enables victims to pursue civil remedies against those who produced or distributed sexually explicit deepfakes with malicious intent. Victims identifiable in these deepfakes can receive up to $150,000 in damages and up to $250,000 if linked to sexual assault, stalking, or harassment.

The legislative move follows high-profile incidents, such as AI-generated explicit images of Taylor Swift appearing on social media and similar cases affecting high school girls across the country. Senate Majority Leader Chuck Schumer emphasised the widespread impact of malicious deepfakes, highlighting the urgent need for protective measures.

Schumer described the DEFIANCE Act as part of broader efforts to implement AI safeguards to prevent significant harm. He called on the House to pass the bill, which has a companion bill awaiting consideration. Schumer assured victims that the government is committed to addressing the issue and protecting individuals from the abuses of AI technology.

China’s new video-generating AI faces limitations due to political censorship

A new AI video-generating model, Kling, developed by Beijing-based Kuaishou, is now widely available, albeit with significant limitations. Initially launched as a waitlisted service for users with Chinese phone numbers, Kling can now be accessed by anyone who provides an email address. The model generates five-second, 720p videos from user prompts, simulating physical effects such as rustling leaves and flowing water.

However, Kling censors politically sensitive topics. Prompts related to ‘Democracy in China,’ ‘Chinese President Xi Jinping,’ and ‘Tiananmen Square protests’ result in error messages. The censorship appears to operate only at the prompt level: videos touching on these topics can still be generated, as long as the prompts do not mention them explicitly.
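Prompt-level filtering of this sort can be as simple as matching the prompt text against a blocklist before any generation runs, which is why paraphrased prompts slip through. The following sketch is a hypothetical illustration of the pattern, not Kuaishou’s actual filter; the blocklist entries simply echo the examples above.

```python
BLOCKED_PHRASES = [
    # Terms reported to trigger error messages in Kling (illustrative list).
    "democracy in china",
    "xi jinping",
    "tiananmen square",
]

def check_prompt(prompt: str) -> str:
    """Reject a prompt if it literally contains a blocked phrase.

    Because only the prompt text is inspected, a request that describes a
    sensitive scene without naming it passes straight through to the model.
    """
    lowered = prompt.lower()
    for phrase in BLOCKED_PHRASES:
        if phrase in lowered:
            return "Error: prompt cannot be processed."
    return "Prompt accepted, generating video..."

print(check_prompt("Tiananmen Square protests"))      # blocked
print(check_prompt("A large public square at dawn"))  # passes
```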

That behaviour likely stems from intense political pressure from the Chinese government. The Cyberspace Administration of China (CAC) is actively testing AI models to ensure they align with core socialist values and has proposed a blacklist of sources for training AI models. Companies must prepare models that produce ‘safe’ answers to thousands of questions, which may slow China’s AI development and create two classes of models: those heavily filtered and those less so.

The dichotomy raises questions about the broader implications for the AI ecosystem, as restrictive policies may hinder technological advancement and innovation.

Trump allies hinder disinformation research leading up to US election

A legal campaign led by allies of former US President Donald Trump has demanded investigations into the misinformation research field, alleging a conspiracy to censor conservative voices online. Under this scrutiny, academics who track election misinformation online have had their work examined daily, including regular AI-assisted scanning of their correspondence for messages from government agencies or tech companies.

Disinformation has proliferated online as the US election approaches, especially after significant events such as the assassination attempt on Trump and President Biden’s withdrawal from the race. Owing to the political scrutiny, some researchers have held back from publicly reporting their insights on misinformation related to public affairs.

Last month, the Supreme Court reversed a lower-court ruling that had restricted communication between the government and tech companies about misinformation online. But the ruling has not deterred Republicans from bringing lawsuits and sending a string of legal demands.

According to the investigation by The Washington Post, the GOP campaign has eroded the once thriving ecosystem of academics, nonprofits and tech industry initiatives dedicated to addressing the spread of misinformation online. Many prominent researchers in the field, like Claire Wardle, Stefanie Friedhoff, Ryan Calo and Kate Starbird, have expressed their concerns for academic freedom and democracy.

Social media platforms asked to tackle cybercrimes in Malaysia

Malaysia is urging social media platforms to strengthen their efforts in combating cybercrimes, including scams, cyberbullying, and child pornography. The government has seen a significant rise in harmful online content and has called on companies like Meta and TikTok to enhance their monitoring and enforcement practices.

In the first quarter of 2024 alone, Malaysia reported 51,638 cases of harmful content referred to social media platforms, surpassing the 42,904 cases from the entire previous year. Communications Minister Fahmi Fadzil noted that some platforms are more cooperative than others, with Meta showing the highest compliance rates—85% for Facebook, 88% for Instagram, and 79% for WhatsApp. TikTok followed with a 76% compliance rate, while Telegram and X had lower rates.

The government has directed social media firms to address these issues more effectively, but it remains up to the platforms to remove content that violates their community guidelines. Malaysia’s communications regulator continues to highlight problematic content to these firms in a bid to curb harmful online activity.