Delhi High Court directs Google and Microsoft to seek review of NCII removal order

The Delhi High Court has directed Google and Microsoft to file review petitions seeking the recall of an earlier order that required search engines to promptly restrict access to non-consensual intimate images (NCII) without making victims repeatedly supply specific URLs. Both tech giants argued that proactively identifying and taking down NCII is technologically infeasible, even with the assistance of AI tools.

The order stems from a 2023 ruling requiring search engines to remove NCII within 24 hours, as mandated by the IT Rules, 2021, or risk losing their safe harbour protections under Section 79 of the IT Act, 2000. That ruling proposed issuing a unique token upon the initial takedown, with search engines responsible for disabling access to any resurfaced copy of the content using pre-existing technology, sparing victims the burden of tracking and repeatedly reporting specific URLs. The court also suggested leveraging hash-matching technology and developing a ‘trusted third-party encrypted platform’ where victims could register NCII content or URLs, shifting the responsibility for identifying and removing resurfaced content from victims to the platform while upholding high standards of transparency and accountability.
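The hash-matching and token mechanism the court describes can be illustrated with a brief sketch. This is a minimal illustration rather than anything specified in the order: the registry class, the function names, and the use of an exact cryptographic hash are all assumptions (production systems such as StopNCII rely on perceptual hashes, which survive resizing and re-encoding in a way that an exact digest does not).

```python
# Minimal sketch of hash-based re-detection of previously reported NCII.
# A plain SHA-256 digest keeps the example dependency-free; real systems
# use perceptual hashes so that altered copies still match. All names here
# are illustrative assumptions, not part of any actual platform API.
import hashlib
import secrets


class TakedownRegistry:
    """Maps content hashes to opaque case tokens issued at first takedown."""

    def __init__(self) -> None:
        self._hash_to_token: dict[str, str] = {}

    def register(self, image_bytes: bytes) -> str:
        """Record reported content and return its unique case token."""
        digest = hashlib.sha256(image_bytes).hexdigest()
        # setdefault keeps the original token if the content was seen before
        return self._hash_to_token.setdefault(digest, secrets.token_urlsafe(16))

    def match(self, image_bytes: bytes) -> str | None:
        """Return the case token if this content was reported previously."""
        return self._hash_to_token.get(hashlib.sha256(image_bytes).hexdigest())


registry = TakedownRegistry()
token = registry.register(b"reported image bytes")      # initial takedown
if registry.match(b"reported image bytes"):             # content resurfaces
    print(f"Match for case {token}: restrict access without a fresh report")
```

The point of the token is that the victim cites a single identifier once; matching any resurfaced copy against the registry then becomes the search engine’s job rather than the victim’s.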

However, Google expressed concern that automated tools cannot discern consent in shared sexual content, which could lead to unintended takedowns and infringe on free speech, echoing Microsoft’s apprehension about the privacy and freedom-of-expression implications of proactive monitoring.

Australian court reverses block on X regarding church stabbing video

An Australian court has rejected the cyber safety regulator’s attempt to extend an order requiring Elon Musk’s X to block videos depicting the stabbing of an Assyrian church bishop, an attack authorities have labelled terrorism. Federal Court judge Geoffrey Kennett declined to prolong the injunction, with the reasons for the decision to be published later.

The legal clash has fuelled tensions between Musk and senior figures in Australia, including Prime Minister Anthony Albanese, who called Musk ‘an arrogant billionaire’ for resisting the video’s takedown. Musk responded with memes condemning the regulatory order as censorship. While other platforms, such as Meta, swiftly removed the content upon request, X has persistently refused to remove the posts globally, arguing that one country’s rules should not dictate internet content.

Last month, the Federal Court upheld the eSafety Commissioner’s order for X to remove 65 posts containing violent footage of the bishop’s stabbing during a sermon in Sydney, an incident for which a 16-year-old boy has been charged with a terrorism offence. X blocked the posts for users in Australia, but the regulator contested that geo-blocking approach, arguing it is ineffective because virtual private networks are widely used to conceal users’ locations.

In response to rising concerns over social media’s influence, Albanese’s government has announced a parliamentary inquiry into the adverse effects of online platforms. The inquiry aims to examine the control social media exerts over what Australians see online, an area the government says currently lacks oversight.

OpenAI considers allowing AI-generated pornography

OpenAI has sparked debate by considering whether to allow users to generate explicit content, including pornography, with its AI-powered tools such as ChatGPT and DALL-E. The company maintains its ban on deepfakes, but the proposal has nonetheless raised concerns among campaigners who question OpenAI’s commitment to producing ‘safe and beneficial’ AI. OpenAI sees potential in ‘not-safe-for-work’ (NSFW) content creation but stresses responsible usage and adherence to legal and ethical standards.

The proposal, outlined in a document discussing OpenAI’s AI development practices, is intended to open a discussion about the boundaries of content generation within its products. Joanne Jang, an OpenAI employee, stressed the need for maximum user control while ruling out deepfake creation. While acknowledging the importance of discussions around sexuality and nudity, OpenAI maintains strong safeguards against deepfakes and prioritises protecting users, particularly children.

Critics, however, have accused OpenAI of straying from its mission statement of developing safe and beneficial AI by delving into potentially harmful commercial endeavours like AI erotica. Concerns about the spread of AI-generated pornography have been underscored by recent incidents, prompting calls for tighter regulation and ethical considerations in the tech sector. While OpenAI’s policies prohibit sexually explicit content, questions remain about the effectiveness of safeguards and the company’s approach to handling sensitive content creation.

Why does it matter?

As discussions unfold, stakeholders, including lawmakers, experts, and campaigners, closely scrutinise OpenAI’s proposal and its potential implications for online safety and ethical AI development. With growing concerns about the misuse of AI technology, the debate surrounding OpenAI’s stance on explicit content generation highlights broader challenges in balancing innovation, responsibility, and societal well-being in the digital age.

Australia launches parliamentary inquiry into social media’s negative impact

Australia has announced a parliamentary inquiry into the impact of social media platforms, responding to growing concerns over their influence on public discourse and the spread of harmful content. Prime Minister Anthony Albanese underscored the need for greater scrutiny, acknowledging that while social media can be a force for good, it can also cause serious harm, particularly on issues as grave as domestic violence and radicalisation.

The government’s move comes amid criticism of platforms such as Meta’s Facebook, ByteDance’s TikTok, and Elon Musk’s X over their handling of violent posts and content moderation. X, in particular, is embroiled in a legal dispute with the Australian government over its refusal to globally remove videos of the recent stabbing attack on an Assyrian church bishop in Sydney. The government argues for broader content removal, while Musk has characterised the order as censorship.

The inquiry will also examine Meta’s decision to stop paying for news content in Australia, reflecting broader concerns about the role of social media in shaping public discourse and its impact on traditional media. Communications Minister Michelle Rowland stressed the importance of understanding how social media companies regulate content and called for greater accountability in their decision-making processes.

As Parliament gears up for the inquiry, its terms and scope are still being determined. The aim is to scrutinise the practices of social media companies and recommend accountability measures, and the inquiry may summon individuals to testify, underscoring the government’s commitment to addressing concerns about social media regulation and content moderation. The inquiry’s outcome will be influential in shaping the future of social media regulation in Australia.

Musk’s X challenges Australian law on content removal

A legal battle between Elon Musk’s X and the Australian cyber regulator has intensified over the removal of 65 posts showing a video of an Assyrian Christian bishop being stabbed during a sermon in Sydney. The eSafety Commissioner ordered the removal, citing terrorism concerns. X opposes global removal, arguing that one country’s laws should not dictate internet content worldwide.

Tim Begbie, representing the regulator, argued that while X has its own policies for removing harmful content, those policies cannot override Australian law. He contended that X’s refusal to remove the content globally bears on what counts as ‘reasonable’ steps under Australia’s Online Safety Act, and that X’s geo-blocking of the posts is ineffective because VPNs allow users to conceal their locations.

Bret Walker, X’s lawyer, defended the company’s actions, stressing the need for global access to newsworthy content. He expressed concern over restricting global access based on Australian laws and emphasised the importance of allowing individuals to form their own opinions.

The Federal Court of Australia has extended a temporary takedown order on the posts until 10 June, delaying a final decision. The case underscores the debate over internet regulation and free speech, with implications for global content moderation and national sovereignty.

Meta Platforms faces heavy fine in Turkey over data-sharing

Turkey’s competition board has levied a substantial fine of 1.2 billion lira ($37.20 million) against Meta Platforms following investigations into data-sharing practices across its social media platforms, including Facebook, Instagram, WhatsApp, and Threads. The board launched an inquiry last December, focusing in particular on potential competition law violations related to the integration of Threads and Instagram.

As part of its findings, the competition board imposed an interim measure in March to restrict data sharing between Threads and Instagram. In response, Meta announced the temporary shutdown of Threads in Turkey to comply with the interim order, reflecting the company’s efforts to adhere to regulatory directives.

The fine encompasses two separate investigations, with 898 million lira attributed to the compliance process and investigations related to Facebook, Instagram, and WhatsApp, and an additional 336 million lira for the inquiry into Threads. The board’s decision emphasises the importance of user consent and notification regarding data usage, ensuring transparency and control over personal data across Meta’s platforms.

Previously, the competition board had imposed fines on Meta, including daily penalties for insufficient documentation and notifications about data-sharing. While these penalties concluded on 3 May 2024, the recent fine extends the ongoing regulatory scrutiny over Meta’s business practices, echoing similar actions taken by regulatory authorities globally to ensure compliance with competition and data protection laws.

TikTok adopts technology to label AI-generated content

TikTok has announced that it will implement a technology called ‘Content Credentials’ to label images and videos created by AI on its platform. Developed by Adobe and the wider Coalition for Content Provenance and Authenticity (C2PA), this digital watermarking technology aims to address concerns about the authenticity and potential misuse of AI-generated content, particularly in the run-up to the US elections.

Content Credentials attaches digital watermarks that record how images and videos were created and edited, allowing users to distinguish human-produced content from AI-generated content. TikTok’s adoption of the technology follows companies such as OpenAI and is part of a broader effort by tech giants to combat the use of AI-generated content for misinformation. YouTube and Meta Platforms, which owns Instagram and Facebook, have also signalled their intention to adopt Content Credentials, even though they already operate their own AI-content labelling tools.

For the watermark system to work, both the makers of generative AI tools and the platforms must adopt the industry standard. For example, if an image is generated using OpenAI’s DALL-E, a watermark is automatically attached to the resulting file; if that marked image is then uploaded to TikTok, it will be labelled as AI-generated content.
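In outline, that flow might look like the sketch below. It is a loose illustration under stated assumptions: the plain dictionary ‘manifest’ stands in for the signed C2PA manifest that real Content Credentials embed in a file’s metadata, and every field and function name here is hypothetical.

```python
# Illustrative sketch of the labelling flow described above: a generative
# tool attaches provenance metadata, and a platform inspects it at upload
# time. The dict-based manifest is a stand-in for a real signed C2PA
# manifest; these field names are assumptions, not the specification.
from dataclasses import dataclass, field


@dataclass
class MediaUpload:
    content: bytes
    manifest: dict = field(default_factory=dict)  # provenance record, if any


def attach_credentials(upload: MediaUpload, tool_name: str) -> MediaUpload:
    """What a generative tool would do: embed a provenance record."""
    upload.manifest = {"generator": tool_name, "ai_generated": True}
    return upload


def label_on_upload(upload: MediaUpload) -> str:
    """What a platform would do: read the manifest and choose a label."""
    if upload.manifest.get("ai_generated"):
        return f"Labelled as AI-generated (made with {upload.manifest['generator']})"
    return "No provenance manifest found; no automatic label applied."


image = attach_credentials(MediaUpload(b"pixel data"), tool_name="DALL-E")
print(label_on_upload(image))  # Labelled as AI-generated (made with DALL-E)
```

The sketch also makes the scheme’s main limitation visible: if the generating tool never attaches a manifest, or an intermediary strips the metadata, the platform has nothing to read and no automatic label is applied.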

Why does it matter?

While TikTok already labels AI-generated content created within its own app, the new initiative extends labelling to content generated outside the TikTok ecosystem. By doing so, TikTok aims to exert greater control over the dissemination of AI-generated material and maintain transparency for its user community.

Furthermore, the decision to regulate the content on its platform comes amid the ongoing legal battle between TikTok’s parent company, ByteDance, and the US government. ByteDance has been ordered to divest TikTok due to national security concerns, but it has filed a lawsuit arguing that this requirement violates the First Amendment. This legal dispute adds another layer of complexity to TikTok’s operations and its future in the United States.

EU seeks details on X’s content moderation practices

The European Commission has taken a significant step in its investigation of X under the Digital Services Act (DSA). On 8 May 2024, the Commission sent X a request for information (RFI) seeking detailed insight into its content moderation practices, particularly in light of X’s recent transparency report, which disclosed that its content moderation team has shrunk by almost 20% since October 2023 and that its linguistic coverage within the EU has fallen from 11 languages to 7.

Furthermore, the European Commission is keen on understanding X’s risk assessments and mitigation strategies concerning generative AI tools, especially their potential impact on electoral processes, dissemination of illegal content, and protection of fundamental rights. The investigation follows formal proceedings initiated against X in December 2023, examining potential breaches of the DSA related to risk management, content moderation, dark patterns, advertising transparency, and data access for researchers.

The request for information is part of an ongoing investigation, building upon prior evidence gathering and analysis, including X’s Transparency report released in March 2024 and its responses to previous information requests. X has been given deadlines to provide the requested information, with 17 May 2024 set for content moderation resources and generative AI-related data and 27 May 2024 for remaining inquiries. Failure to comply could result in fines or penalties imposed by the Commission, as stipulated under Article 74(2) of the DSA.

Ukraine raises alarm over Russia’s TikTok tactics

Ukraine has warned that Russia is escalating its use of TikTok to challenge President Volodymyr Zelenskiy’s legitimacy and erode national morale amid the ongoing war. Russian influencers and bots are reportedly behind viral TikTok videos focused on 20 May, the date Zelenskiy’s first term would have ended had elections not been suspended under martial law. Andriy Kovalenko, a senior official responsible for countering Russian disinformation, highlighted Russia’s systematic exploitation of the platform to sway public opinion.

As Russia continues its military campaign against Ukraine, it has expanded its information warfare to platforms like TikTok alongside traditional battlegrounds. The use of TikTok to disseminate misinformation represents a strategic shift in Russia’s multifaceted approach to influencing public perception and leveraging its advantage in cyberspace. TikTok, owned by ByteDance, has responded by enhancing safety measures and removing harmful misinformation in Ukraine amid broader scrutiny over data security and misinformation concerns from the US and the EU.

In response to these challenges, Ukraine is urging social media companies such as TikTok to establish full-scale offices in Kyiv so they can combat disinformation effectively. Kovalenko, who actively uses TikTok to counter false narratives, emphasised the need to adapt Ukraine’s approach to the influential platform. His call comes as TikTok reports uncovering covert influence operations related to the conflict and removing millions of problematic videos over the last quarter.

Why does it matter?

Ukraine’s efforts to confront Russia’s information campaign on TikTok reflect broader concerns over the app’s influence and security. While governments like the US and the EU take measures to safeguard against potential threats posed by platforms like TikTok, the ongoing geopolitical dynamics and the use of social media as a battleground highlight the complex challenges digital technologies pose in the modern information landscape.

Tech firms urged to implement child safety measures in UK

Social media platforms such as Facebook, Instagram, and TikTok face proposed measures in the UK to modify their algorithms and better safeguard children from harmful content. These measures, outlined by regulator Ofcom, are part of the broader Online Safety Act and include implementing robust age checks to shield children from harmful material related to sensitive topics like suicide, self-harm, and pornography.

Ofcom’s Chief Executive, Melanie Dawes, has underscored the situation’s urgency, emphasising the necessity of holding tech firms accountable for protecting children online. She asserts that platforms must reconfigure aggressive algorithms that push harmful content to children and incorporate age verification mechanisms.

The use of complex algorithms by social media companies to curate content has raised serious concerns, as these algorithms often amplify harmful material and can influence children negatively. The proposed measures urge platforms to re-evaluate their algorithmic systems and prioritise child safety, giving children a safer online experience appropriate to their age.

UK’s Technology Secretary, Michelle Donelan, called for social media platforms to engage with regulators and proactively implement these measures, cautioning against waiting for enforcement and potential fines. After a consultation, Ofcom plans to finalise its Children’s Safety Codes of Practice within a year, with anticipated enforcement actions, including penalties for non-compliance, once parliament approves.