Grok to get new AI video detection tools, Musk says

Musk said Grok will analyse bitstreams for AI signatures and scan the web to verify the origins of videos. Grok itself added that it will detect subtle AI artefacts in compression and generation patterns that humans cannot see.
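Neither Musk nor xAI has explained how such detection would work. As a purely illustrative sketch, not xAI's method, the snippet below computes the kind of frequency-domain statistic that synthetic-media research often examines: the share of a frame's spectral energy at high frequencies, where some generators leave periodic upsampling artefacts.

```python
import numpy as np

def high_frequency_energy_ratio(frame: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of a frame's spectral energy above a radial frequency cutoff.

    Toy statistic only: some image generators leave periodic upsampling
    artefacts that show up as unusual high-frequency energy. A real detector
    would combine many signals (temporal, codec-level, provenance metadata).
    """
    # 2-D FFT of the greyscale frame, shifted so low frequencies sit at the centre.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(frame.astype(float)))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalised radial distance from the spectrum centre (0 = DC component).
    radius = np.sqrt(((yy - h / 2) / h) ** 2 + ((xx - w / 2) / w) ** 2)
    total = spectrum.sum()
    return float(spectrum[radius > cutoff].sum() / total) if total else 0.0
```

In practice, published detectors train classifiers over many such features rather than thresholding a single ratio, and xAI has not said which approach Grok will take.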

AI tools such as Grok Imagine and Sora are reshaping the internet by making realistic video generation accessible to anyone. The rise of deepfakes has alarmed users, who warn that high-quality fake videos could soon be indistinguishable from real footage.

A user on X expressed concern that leaders are not addressing the growing risks. Elon Musk responded, revealing that his AI company xAI is developing Grok’s ability to detect AI-generated videos and trace their origins online.

The detection features aim to rebuild trust in digital media as AI-generated content spreads. Commentators have dubbed the flood of such content ‘AI slop’, raising concerns about misinformation and consent.

Concerns about deepfakes have grown since OpenAI launched the Sora app. A surge in deepfake content prompted OpenAI to tighten restrictions on cameo mode, allowing users to opt out of specific scenarios.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Age verification and online safety dominate EU ministers’ Horsens meeting

EU digital ministers are meeting in Horsens on 9–10 October to improve the protection of minors online. Age verification, child protection, and digital sovereignty are at the top of the agenda under the Danish EU Presidency.

The Informal Council Meeting on Telecommunications is hosted by the Ministry of Digital Affairs of Denmark and chaired by Caroline Stage. European Commission Executive Vice-President Henna Virkkunen is also attending to support discussions on shared priorities.

Ministers are considering measures to prevent children from accessing age-inappropriate platforms and reduce exposure to harmful features like addictive designs and adult content. Stronger safeguards across digital services are being discussed.

The talks also focus on Europe’s technological independence. Ministers aim to enhance the EU’s digital competitiveness and sovereignty while setting a clear direction ahead of the Commission’s upcoming Digital Fairness Act proposal.

A joint declaration, ‘The Jutland Declaration’, is expected as an outcome. It will highlight the need for stronger EU-level measures and effective age verification to create a safer online environment for children.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Facebook and Instagram Reels get multilingual boost with Meta AI

Meta has introduced new AI-powered translation features that allow Facebook and Instagram users to enjoy reels from around the world in multiple languages.

Meta AI now translates, dubs, and lip-syncs short videos in English, Spanish, Hindi, and Portuguese, with more languages to be added soon.

The tool reproduces a creator’s voice and tone while automatically syncing translated audio to their lip movements, providing a natural viewing experience. It is free for Facebook creators with over 1,000 followers and for all public Instagram accounts in countries where Meta AI is available.

The expansion is part of Meta’s goal to make global content more accessible and to help creators reach wider audiences. By breaking language barriers, Meta aims to strengthen community connections and turn Reels into a platform for global cultural exchange.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Retailers face new pressure under California privacy law

California has entered a new era of privacy and AI enforcement after the state’s privacy regulator fined Tractor Supply USD 1.35 million for failing to honour opt-outs and ignoring Global Privacy Control signals. The case marks the largest penalty yet from the California Privacy Protection Agency.

The case reflects a widening focus in California on how companies manage consumer data, verification processes, and third-party vendors. Regulators are now demanding that privacy signals be enforced at the technology layer, not just displayed through website banners or webforms.
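For illustration, browsers transmit Global Privacy Control as a `Sec-GPC: 1` request header (and expose it as `navigator.globalPrivacyControl` in JavaScript), so honouring it at the technology layer means reading that signal server-side rather than relying on a banner. The minimal sketch below uses Flask, with a hypothetical render_page helper standing in for real page rendering:

```python
from flask import Flask, g, request

app = Flask(__name__)

@app.before_request
def read_gpc_signal():
    # The Global Privacy Control spec sends the opt-out preference as the
    # "Sec-GPC: 1" request header; treat it as an opt-out of sale/sharing.
    g.gpc_opt_out = request.headers.get("Sec-GPC") == "1"

@app.route("/products")
def products():
    # Hypothetical page: skip loading advertising trackers for opted-out visitors.
    return render_page(load_trackers=not g.gpc_opt_out)

def render_page(load_trackers: bool) -> str:
    # Stand-in for real template rendering; included only to keep the sketch runnable.
    return f"<html><!-- third-party trackers enabled: {load_trackers} --></html>"
```

An auditable implementation would also log the signal and propagate the opt-out to downstream vendors, in line with the controls regulators describe below.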

Retailers must now show active, auditable compliance, with clear privacy notices, automated data controls and stronger vendor agreements. Regulators have also warned that businesses will be held responsible for partner failures and poor oversight of cookies and tracking tools.

At the same time, California’s new AI law, SB 53, extends governance obligations to frontier AI developers, requiring transparency around safety benchmarks and misuse prevention. The measure connects AI accountability to broader data governance, reinforcing that privacy and AI oversight are now inseparable.

Executives across retail and technology are being urged to embed compliance and governance into daily operations. California’s regulators are shifting from punishing visible lapses to demanding continuous, verifiable proof of compliance across both data and AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ant Group launches trillion-parameter AI model Ling-1T

Ant Group has unveiled its Ling AI model family, introducing Ling-1T, a trillion-parameter large language model that has been open-sourced for public use.

The Ling family now includes three main series: the Ling non-thinking models, the Ring thinking models, and the multimodal Ming models.

Ling-1T delivers state-of-the-art performance in code generation, mathematical reasoning, and logical problem-solving, achieving 70.42% accuracy on the 2025 AIME benchmark.

The model combines efficient inference with strong reasoning capabilities, marking a major advance in AI development for complex cognitive tasks.

The company’s Chief Technology Officer, He Zhengyu, said that Ant Group views AGI as a public good that should benefit society.

The release of Ling-1T and the earlier Ring-1T-preview underscores Ant Group’s commitment to open, collaborative AI innovation and the development of inclusive AGI technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ID data from 70,000 Discord users exposed in third-party breach

Discord has confirmed that official ID images belonging to around 70,000 users may have been exposed in a cyberattack targeting a third-party service provider. The platform itself was not breached, but hackers targeted a company involved in age verification processes.

The leaked data may include personal information, partial credit card details, and conversations with Discord’s customer service agents. No full credit card numbers, passwords, or activity beyond support interactions were affected. Impacted users have been contacted, and law enforcement is investigating.

The platform has revoked the support provider’s access to its systems and has not named the third party involved. Zendesk, a customer service software supplier to Discord, said its own systems were not compromised and denied being the source of the breach.

Discord has rejected claims circulating online that the breach was larger than reported, calling them part of an attempted extortion. The company stated it would not comply with demands from the attackers. Cybercriminals often sell personal information on illicit markets for use in scams.

ID numbers and official documents are especially valuable because, unlike credit card details, they rarely change. Discord previously tightened its age-verification measures following concerns over the misuse of some servers to distribute illegal material.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

European Commission launches Apply AI and AI in Science strategies

Countries are racing to harness AI, and the European Commission has unveiled two strategies to maintain Europe’s competitiveness. Apply AI targets faster adoption across industries and the public sector, while AI in Science focuses on boosting Europe’s research leadership.

Commission President Ursula von der Leyen stated that Europe must shape AI’s future by balancing innovation and safety. The European Commission is mobilising €1 billion to boost adoption in healthcare, manufacturing, energy, defence, and culture, while supporting SMEs.

Measures include creating AI-powered screening centres for healthcare, backing frontier models, and upgrading testing infrastructure. An Apply AI Alliance will unite industry, academia, civil society, and public bodies to coordinate action, while an AI Observatory will monitor sector trends and impacts.

The AI in Science Strategy centres on RAISE, a new virtual institute to pool and coordinate resources for applying AI in research. Investments include €600 million in compute power through Horizon Europe and €58 million for talent networks, alongside plans to double annual AI research funding to over €3 billion.

The EU aims to position itself as a global hub for trustworthy and innovative AI by linking infrastructure, data, skills, and investment. Upcoming events, such as the AI in Science Summit in Copenhagen, will showcase new initiatives as Europe pushes to translate its AI ambitions into tangible outcomes.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

California enacts landmark AI whistleblower law

California has enacted SB 53, offering legal protection to employees reporting AI risks or safety concerns. The law covers companies using large-scale computing for AI model training, focusing on leading developers and exempting smaller firms.

It also mandates transparency, requiring risk mitigation plans, safety test results, and reporting of critical safety incidents to the California Office of Emergency Services (OES).

The legislation responds to calls from industry insiders, including former OpenAI and DeepMind employees, who highlighted restrictive offboarding agreements that silenced criticism and limited public discussion of AI risks.

The new law protects employees who have ‘reasonable cause’ to believe a catastrophic risk exists, defined as endangering 50 lives or causing $1 billion in damages. It allows them to report concerns to regulators, the Attorney General, or management without fear of retaliation.

While experts praise the law as a crucial step, they note its limitations. The protections focus on catastrophic risks, leaving smaller but significant harms unaddressed.

Harvard law professor Lawrence Lessig argues that a lower ‘good faith’ standard would simplify protections for employees, though under the law that standard applies only to internal anonymous reporting channels.

The law reflects growing recognition of the stakes in frontier AI, balancing the need for innovation with safeguards that encourage transparency. Advocates stress that protecting whistleblowers is essential for employees to raise AI concerns safely, even at personal or financial risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI expands ChatGPT Go to 16 new Asian markets

The US startup OpenAI has broadened access to its affordable ChatGPT Go plan, now available in 16 additional countries across Asia, including Malaysia, Vietnam, the Philippines, Pakistan, and Thailand.

Priced at under $5 per month, the plan offers local currency payments in select regions, while others will pay in USD with tax-adjusted variations.

ChatGPT Go gives users higher message and image-generation limits, increased upload capacity, and double the memory of the free plan.

The move follows significant regional growth, with Southeast Asia’s weekly active users increasing fourfold, and builds on earlier launches in India and Indonesia, where paid subscriptions have already doubled.

The expansion intensifies competition with Google, which recently introduced its Google AI Plus plan in more than 40 countries. Both companies are vying to attract users in fast-growing markets with low-cost AI access, each blending productivity and creative tools into subscription offerings.

At OpenAI’s DevDay 2025 in San Francisco, CEO Sam Altman announced that ChatGPT’s global weekly active users have reached 800 million.

OpenAI is also introducing in-chat applications from partners like Spotify, Zillow, and Coursera, signalling a shift toward transforming ChatGPT into a broader AI platform ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!