UNESCO ethics framework guides national AI roadmap in Lao PDR

Lao PDR has unveiled plans for a national AI strategy guided by UNESCO’s ethics framework to support responsible and inclusive digital development. The framework will inform policy design across governance, education, infrastructure, and economic transformation.

The accompanying readiness assessment outlines Laos’ capacity to govern AI, noting progress in digital policy alongside gaps in access, skills, and research capacity. Officials stressed the need for homegrown AI solutions that respect local culture, reduce inequality, and deliver broad social benefit.

UNESCO and the UN Country Team said the strategy aligns with Laos’ broader digital transformation goals under its 10th development plan. The initiative aims to improve coordination, increase R&D investment, and modernise education to support ethical AI deployment.

Lao PDR joins 77 countries worldwide using UNESCO’s tools to shape national AI policies, reinforcing its commitment to sustainable innovation, ethical governance, and inclusive growth as artificial intelligence becomes central to future development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India considers social media bans for children under 16

India is emerging as a potential test case for age-based social media restrictions as several states examine Australia-style bans on children’s access to platforms.

Goa and Andhra Pradesh are studying whether to prohibit social media use for those under 16, citing growing concerns over online safety and youth well-being. The debate has also reached the judiciary, with the Madras High Court urging the federal government to consider similar measures.

The proposals carry major implications for global technology companies, given that India’s internet population exceeds one billion users and continues to skew young.

Platforms such as Meta, Google and X rely heavily on India for long-term growth, advertising revenue and user expansion. Industry voices argue parental oversight is more effective than government bans, warning that restrictions could push minors towards unregulated digital spaces.

Australia’s under-16 ban, which came into force in late 2025, has already exposed enforcement difficulties, particularly around age verification and privacy risks. Determining users’ ages accurately remains challenging, while digital identity systems raise concerns about data security and surveillance.

Legal experts note that internet governance falls under India’s federal authority, limiting what individual states can enforce without central approval.

Although India’s data protection law includes safeguards for children, full implementation will extend through 2027, leaving policymakers to balance child protection, platform accountability and unintended consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Austrian watchdog rules against Microsoft education tracking

Austria’s data protection authority has ruled that Microsoft unlawfully placed tracking cookies on a child’s device without valid consent.

The case stems from a complaint filed by a privacy group, noyb, concerning Microsoft 365 Education, a platform used by millions of pupils and teachers across Europe.

According to the decision, Microsoft deployed cookies that analysed user behaviour, collected browser data and served advertising purposes, despite being used in an educational context involving minors. The Austrian authority ordered the company to cease the unlawful tracking within four weeks.

Noyb warned the ruling could have broader implications for organisations relying on Microsoft software, particularly schools and public bodies. A data protection lawyer at the group criticised Microsoft’s approach to privacy, arguing that protections appear secondary to marketing considerations.

The ruling follows earlier GDPR findings against Microsoft, including violations of access rights and concerns raised over the European Commission’s own use of Microsoft 365.

Although previous enforcement actions were closed after contractual changes, regulatory scrutiny of Microsoft’s education and public sector products continues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic CEO warns of civilisation-level AI risk

Anthropic chief executive Dario Amodei has issued a stark warning that superhuman AI could inflict civilisation-level damage unless governments and industry act far more quickly and seriously.

In a forthcoming essay, Amodei argues humanity is approaching a critical transition that will test whether political, social and technological systems are mature enough to handle unprecedented power.

Amodei believes AI systems will soon outperform humans across nearly every field, describing a future ‘country of geniuses in a data centre’ capable of autonomous and continuous creation.

He warns that such systems could rival nation-states in influence, accelerating economic disruption while placing extraordinary power in the hands of a small number of actors.

Among the gravest dangers, Amodei highlights mass displacement of white-collar jobs, rising biological security risks and the empowerment of authoritarian governments through advanced surveillance and control.

He also cautions that AI companies themselves pose systemic risks due to their control over frontier models, infrastructure and user attention at a global scale.

Despite the severity of his concerns, Amodei maintains cautious optimism, arguing that meaningful governance, transparency and public engagement could still steer AI development towards beneficial outcomes.

Without urgent action, however, he warns that financial incentives and political complacency may override restraint during the most consequential technological shift humanity has faced.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI model detects wide range of health risks via sleep analysis

Recent research indicates that AI applied to sleep pattern analysis can identify, from a single night’s sleep record, signals linked to more than 130 health conditions, including heart disease, metabolic dysfunction and respiratory issues.

By using machine learning to analyse detailed physiological data collected during sleep, AI models may reveal subtle patterns that correlate with existing or future health risks.

Proponents suggest that this technology could support early detection and preventative healthcare by offering a non-invasive way to screen for multiple conditions simultaneously, potentially guiding timely medical intervention.

However, clinicians stress that such AI tools should complement, not replace, formal medical evaluation and diagnosis.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI and robots join forces in NHS trial to improve cancer diagnosis

NHS England has launched a pilot project that uses AI software to rapidly analyse lung scans and flag suspicious nodules, followed by a robotic bronchoscopy system that can reach areas deep in the lung that were previously difficult to biopsy.

This approach could replace weeks of repeat scans and invasive procedures with a single targeted session, helping doctors diagnose or rule out cancer sooner.

The project, led at Guy’s and St Thomas’ NHS Foundation Trust, aims to support expanded national lung screening programmes and reduce health outcome inequalities by detecting cancers at an earlier, more treatable stage.

Officials describe the technology as a ‘glimpse of the future’ of cancer detection, and the pilot will gather evidence on effectiveness and safety before any wider rollout.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta tests paid features on Facebook, Instagram and WhatsApp

Subscriptions for Facebook, Instagram and WhatsApp are set to be tested as Meta explores new revenue streams while keeping core access free. Paid tiers would place selected features and advanced sharing controls behind a subscription.

Early signals indicate the subscriptions could launch within months, with each platform offering its own set of premium tools. Meta has confirmed it will trial multiple formats rather than rely on a single bundled model.

AI plays a central role in the plan, with subscribers gaining access to AI-powered features, including video generation. The recently acquired Manus AI agent will be integrated across Meta services and offered separately to business users.

User reaction is expected to influence how far the company pushes the model, including potential bundles or platform-specific pricing. Wider acceptance could encourage other social networks to adopt similar subscription strategies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia’s green energy under pressure

Australia’s renewable energy sector faces new challenges as major tech companies establish AI data centres across the country. Projects once intended to export solar power internationally are now being reshaped by domestic energy demand.

Sun Cable, supported by billionaires Mike Cannon-Brookes and Andrew Forrest, aimed to deliver Australian solar energy to Singapore via a 4,300-kilometre sea cable. The project symbolised a vision for Australia to become a leading exporter of renewable electricity.

The rapid expansion of AI facilities is shifting energy priorities towards domestic infrastructure. Tech companies’ demand for electricity is creating new competition with planned renewable export projects.

Energy policy decisions now carry broader implications for emissions, the national grid, and Australia’s role in the global clean energy market. Careful planning will be essential to balance domestic growth with long-term renewable ambitions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Audi accelerates its AI-driven smart factory transformation

Audi is expanding the use of AI in production and logistics by replacing local factory computers with a central cloud platform. The Edge Cloud 4 Production platform enables flexible, networked automation while reducing hardware requirements and maintenance costs and improving IT security.

AI applications are being deployed to improve efficiency, quality, and employee support. AI-controlled robots are taking over physically demanding tasks, cloud-based systems provide real-time worker guidance, and vision-based solutions detect defects and anomalies early in the production process.

Data-driven platforms such as the P-Data Engine and ProcessGuardAIn allow Audi to monitor manufacturing processes in real time using machine and sensor data. These tools support early fault detection, reduce follow-up costs, and form the basis for predictive maintenance and scalable quality assurance across plants.

Audi is also extending automation to complex production areas that have traditionally relied on manual work, including wiring loom manufacturing and installation. In parallel, the company is working with technology firms and research institutions such as IPAI Heilbronn to accelerate innovation, scale AI solutions, and ensure the responsible use of AI across its global production network.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Snap faces new AI training lawsuit in California

A group of YouTubers has filed a copyright lawsuit against Snap in the US, alleging their videos were used to train AI systems without permission. The case was lodged in a federal court in California and targets AI features used within Snapchat.

The creators claim that Snap relied on large-scale video-language datasets initially intended for academic research. According to the filing, accessing the material required bypassing YouTube safeguards and licence restrictions on commercial use.

The lawsuit seeks statutory damages and a permanent injunction to block further use of the content. The case is led by the creators behind the h3h3 channel, alongside two smaller US-based golf channels.

The action adds Snap to a growing list of tech companies facing similar claims in the US. Courts in California and elsewhere continue to weigh how copyright law applies to AI training practices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!