India considers social media bans for children under 16

India is emerging as a potential test case for age-based social media restrictions as several states examine Australia-style bans on children’s access to platforms.

Goa and Andhra Pradesh are studying whether to prohibit social media use for those under 16, citing growing concerns over online safety and youth well-being. The debate has also reached the judiciary, with the Madras High Court urging the federal government to consider similar measures.

The proposals carry major implications for global technology companies, given that India’s internet population exceeds one billion users and continues to skew young.

Platforms such as Meta, Google and X rely heavily on India for long-term growth, advertising revenue and user expansion. Industry voices argue parental oversight is more effective than government bans, warning that restrictions could push minors towards unregulated digital spaces.

Australia’s under-16 ban, which came into force in late 2025, has already exposed enforcement difficulties, particularly around age verification and privacy risks. Determining users’ ages accurately remains challenging, while digital identity systems raise concerns about data security and surveillance.

Legal experts note that internet governance falls under India’s federal authority, limiting what individual states can enforce without central approval.

Although India’s data protection law includes safeguards for children, full implementation will extend through 2027, leaving policymakers to balance child protection, platform accountability and unintended consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic CEO warns of civilisation-level AI risk

Anthropic chief executive Dario Amodei has issued a stark warning that superhuman AI could inflict civilisation-level damage unless governments and industry act far more quickly and seriously.

In a forthcoming essay, Amodei argues humanity is approaching a critical transition that will test whether political, social and technological systems are mature enough to handle unprecedented power.

Amodei believes AI systems will soon outperform humans across nearly every field, describing a future ‘country of geniuses in a data centre’ capable of autonomous and continuous creation.

He warns that such systems could rival nation-states in influence, accelerating economic disruption while placing extraordinary power in the hands of a small number of actors.

Among the gravest dangers, Amodei highlights mass displacement of white-collar jobs, rising biological security risks and the empowerment of authoritarian governments through advanced surveillance and control.

He also cautions that AI companies themselves pose systemic risks due to their control over frontier models, infrastructure and user attention at a global scale.

Despite the severity of his concerns, Amodei maintains cautious optimism, arguing that meaningful governance, transparency and public engagement could still steer AI development towards beneficial outcomes.

Without urgent action, however, he warns that financial incentives and political complacency may override restraint during the most consequential technological shift humanity has faced.


Audi dramatically transforms AI-driven smart factories

Audi is expanding the use of AI in production and logistics by replacing local factory computers with a central cloud platform. The Edge Cloud 4 Production enables flexible, networked automation while reducing hardware needs and maintenance costs and strengthening IT security.

AI applications are being deployed to improve efficiency, quality, and employee support. AI-controlled robots are taking over physically demanding tasks, cloud-based systems provide real-time worker guidance, and vision-based solutions detect defects and anomalies early in the production process.

Data-driven platforms such as the P-Data Engine and ProcessGuardAIn allow Audi to monitor manufacturing processes in real time using machine and sensor data. These tools support early fault detection, reduce follow-up costs, and form the basis for predictive maintenance and scalable quality assurance across plants.
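Audi has not published how its monitoring tools work internally, but the early-fault-detection idea they describe can be sketched with a simple rolling-statistics check on a sensor stream: flag any reading that deviates sharply from the recent mean. The window size, threshold, and signal values below are illustrative assumptions, not Audi parameters.

```python
from collections import deque
import statistics

def detect_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate sharply from the rolling mean of a sensor stream."""
    recent = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(readings):
        if len(recent) == window:
            mean = statistics.fmean(recent)
            stdev = statistics.pstdev(recent) or 1e-9  # avoid division by zero
            if abs(value - mean) / stdev > threshold:
                anomalies.append(i)
        recent.append(value)
    return anomalies

# A stable vibration signal with one sudden spike at index 8.
signal = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 9.0, 1.0]
print(detect_anomalies(signal))  # → [8]
```

A production system would feed flagged indices into maintenance scheduling, which is where the "predictive maintenance" gains come from: faults are caught before they cause costly downtime.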

Audi is also extending automation to complex production areas that have traditionally relied on manual work, including wiring loom manufacturing and installation. In parallel, the company is working with technology firms and research institutions such as IPAI Heilbronn to accelerate innovation, scale AI solutions, and ensure the responsible use of AI across its global production network.


Snap faces new AI training lawsuit in California

A group of YouTubers has filed a copyright lawsuit against Snap in the US, alleging their videos were used to train AI systems without permission. The case was lodged in a federal court in California and targets AI features used within Snapchat.

The creators claim that Snap relied on large-scale video-language datasets initially intended for academic research. According to the filing, accessing the material required bypassing YouTube safeguards and licence restrictions on commercial use.

The lawsuit seeks statutory damages and a permanent injunction to block further use of the content. The case is led by the creators behind the h3h3 channel, alongside two smaller US-based golf channels.

The action adds Snap to a growing list of tech companies facing similar claims in the US. Courts in California and elsewhere continue to weigh how copyright law applies to AI training practices.


Zurich researchers link AI with spirituality studies

Researchers at the University of Zurich have received a Postdoc Team Award for SpiritRAG, an AI system designed to analyse religion and spirituality in United Nations documents. The interdisciplinary project brings together expertise from Zurich across computer science, linguistics, education and spiritual care.

SpiritRAG connects large language models with more than 7,500 UN texts, allowing users to ask context-sensitive questions grounded in the original sources. The system addresses challenges where meaning varies across cultures, history and political settings.
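The retrieval-augmented approach behind a system like SpiritRAG can be sketched in miniature: index a corpus, rank documents against the query, and pass the top matches to the language model as grounding context. This is a generic sketch, not SpiritRAG's published implementation; the toy bag-of-words scoring stands in for the dense embeddings a real system would use, and the corpus snippets are invented.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real RAG systems use dense vector models.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

corpus = [
    "Resolution on freedom of religion and belief in public life.",
    "Report on maritime trade routes and port infrastructure.",
    "Study of spirituality and health care in humanitarian settings.",
]
hits = retrieve("religion and spirituality in UN policy", corpus)
# The retrieved passages are then placed in the LLM prompt as cited context,
# which is what keeps answers grounded in the original documents.
```

Grounding answers in retrieved source passages, rather than the model's parametric memory, is what makes such a tool auditable for policy analysis.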

The Zurich-based team presented SpiritRAG at EMNLP 2025 in Suzhou, China, and later at the AI+X Summit in Zurich. Interest from outside organisations highlights demand for transparent AI tools supporting research and policy analysis.

Designed as open-source infrastructure, SpiritRAG can be deployed with different datasets while using limited resources. The researchers say the approach supports responsible AI use in complex domains where accuracy and context remain critical.


Data privacy shifts from breaches to authorised surveillance

Data Privacy Week has returned at a time when personal information is increasingly collected by default rather than through breaches. Campaigns urge awareness, yet privacy is being reshaped by lawful, large-scale data gathering driven by corporate and government systems.

In the US, companies now collect, retain and combine data with AI tools under legal authority, often without meaningful consent. Platforms such as TikTok illustrate how vast datasets are harvested regardless of ownership, shifting debates towards who controls data rather than how much is taken.

US policy responses have focused on national security rather than limiting surveillance itself. Pressure on TikTok to separate from Chinese ownership left data collection intact, while border authorities in the US are seeking broader access to travellers’ digital and biometric information.

Across the US technology sector, privacy increasingly centres on agency rather than secrecy. Data Privacy Week highlights growing concern that once information is gathered, control is lost, leaving accountability lagging behind capability.


Facial recognition expansion anchors UK policing reforms driven by AI

UK authorities have unveiled a major policing reform programme that places AI and facial recognition at the centre of future law enforcement strategy. The plans include expanding the use of Live Facial Recognition and creating a national hub to scale AI tools across police forces.

The Home Office will fund 40 new facial recognition vans for town centres across England and Wales, significantly increasing real-time biometric surveillance capacity. Officials say the rollout responds to crime that increasingly involves digital activity.

The UK government will also invest £115 million over three years into a National Centre for AI in Policing, known as Police.AI. The centre will focus on speeding investigations, reducing paperwork and improving crime detection.

New governance measures will regulate police use of facial recognition and introduce a public register of deployed AI systems. National data standards aim to strengthen accountability and coordination across forces.

Structural reforms include creating a National Police Service to tackle serious crime and terrorism. Predictive analytics, deepfake detection and digital forensics will play a larger operational role.


Autonomous AI fails most tasks in virtual company experiment

Researchers at Carnegie Mellon University created a virtual company staffed solely by AI ‘employees’ built on large language models from vendors including Anthropic, OpenAI and Google, assigning them roles such as financial analyst and software engineer.

In this simulated work environment, the AI agents struggled to complete most tasks, with even the best-performing model only completing about a quarter of its assignments.

The experiment highlighted key weaknesses in current AI systems, including difficulty interpreting nuanced instructions, managing web navigation with pop-ups, and coordinating multi-step workflows without human intervention.

These gaps suggest that human judgement, adaptability and collaboration remain essential in real workplaces for the foreseeable future.


UK firms prioritise cyber resilience and AI growth

Cybersecurity is set to receive the largest budget increases over the next 12 months, as organisations respond to rising geopolitical tensions and a surge in high-profile cyber-attacks, according to the KPMG Global Tech Report 2026.

More than half of UK firms plan to lift cybersecurity spending by over 10 percent, outpacing global averages and reflecting heightened concern over digital resilience.

AI and data analytics are also attracting substantial investment, with most organisations increasing budgets as they anticipate stronger returns by the end of 2026. Executives expect AI to shift from an efficiency tool to a core revenue driver, signalling a move toward large-scale deployment.

Despite strong investment momentum, scaling remains a major challenge. Fewer than one in 10 organisations report fully deployed AI or cybersecurity systems today, although around half expect to reach that stage within a year.

Structural barriers, fragmented ownership, and unclear accountability continue to slow execution, highlighting the complexity of translating strategy into operational impact.

Agentic AI is emerging as a central focus, with most organisations already embedding autonomous systems into workflows. Demand for specialist AI roles is rising, alongside closer collaboration to ensure secure deployment, governance, and continuous monitoring.


Aquila transforms warehouse operations using AI automation

Aquila has completed a €5 million investment in AI-driven warehouse automation at its Dragomiresti logistics centre in Ilfov county. The project is a strategic response to increasing portfolio complexity and growing distribution volumes in the FMCG sector.

The automation solution is built around AI-based vision systems that identify products directly from images using shape, colour and other visual characteristics. The technology removes the need for labels or manual scanning, even when packaging orientation or appearance varies slightly.
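Aquila has not disclosed its model architecture, but the label-free identification idea can be illustrated with a nearest-reference match on simple visual features. The product names, pixel values, and colour-only matching below are all illustrative assumptions; a production system would run a trained vision model on camera frames.

```python
import math

# Tiny stand-in "images": lists of (r, g, b) pixels. Names are hypothetical.
REFERENCE_PRODUCTS = {
    "olive_oil_bottle": [(40, 90, 30), (45, 95, 35), (38, 88, 32)],
    "tomato_sauce_jar": [(180, 30, 25), (175, 35, 30), (185, 28, 22)],
}

def mean_colour(pixels):
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def classify(pixels):
    """Match an image to the closest reference product by average colour."""
    target = mean_colour(pixels)
    def dist(name):
        return math.dist(target, mean_colour(REFERENCE_PRODUCTS[name]))
    return min(REFERENCE_PRODUCTS, key=dist)

# An unlabelled scan whose colours are close to the red-dominant reference.
unlabelled_scan = [(178, 33, 27), (182, 29, 24)]
print(classify(unlabelled_scan))  # → tomato_sauce_jar
```

Matching against learned references rather than reading barcodes is what lets such a system tolerate minor variations in orientation or packaging appearance.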

According to the company, the system improves the speed and accuracy of warehouse operations while reducing manual work and optimising storage space. These efficiency gains allow better use of operational resources.

The investment enables Aquila to scale logistics operations without proportional increases in resources. The company reports improved internal efficiency, stronger service quality for customers and the creation of medium-term competitive advantages.
