ACCC lawsuit triggers Microsoft’s rethink and apology on Copilot subscription communications

Microsoft apologised after Australia’s regulator said it steered Microsoft 365 users to pricier Copilot plans while downplaying cheaper Classic tiers. The move follows price-rise emails in the APAC region and confusion over increases to Microsoft 365 Personal and Family subscriptions.

ACCC officials said communications may have denied customers informed choices by omitting equivalent non-AI plans. Microsoft acknowledged it could have been clearer and accepted that Classic alternatives might have saved some subscribers money under the October 2024 changes.

Redmond is offering affected customers refunds for the difference between Copilot and Classic tiers and has begun contacting subscribers in Australia and New Zealand. The company also re-sent its apology email after discovering a broken link to the Classic plans page.

Questions remain over whether similar remediation will extend to Malaysia, Singapore, Taiwan, and Thailand, which also saw price hikes earlier this year. Consumer groups are watching for consistent remedies and plain-English disclosures across all impacted markets.

Regulators have sharpened scrutiny of dark patterns, bundling, and AI-linked upsells as digital subscriptions proliferate. Clear side-by-side plan comparisons and functional disclosures about AI features are likely to become baseline expectations for compliance and customer trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO launches Beruniy Prize to promote ethical AI innovation

UNESCO and the Uzbekistan Arts and Culture Development Foundation have introduced the UNESCO–Uzbekistan Beruniy Prize for Scientific Research on the Ethics of Artificial Intelligence.

The award, presented at the 43rd General Conference in Samarkand, recognises global leaders whose research and policy efforts promote responsible and human-centred AI innovation. Each laureate received $30,000, a Beruniy medal, and a certificate.

Professor Virgilio Almeida was honoured for advancing ethical, inclusive AI and democratic digital governance. Human rights expert Susan Perry and computer scientist Claudia Roda were recognised for promoting youth-centred AI ethics that protect privacy, inclusion, and fairness.

The Institute for AI International Governance at Tsinghua University in China also received the award for promoting international cooperation and responsible AI policy.

UNESCO’s Audrey Azoulay and Gayane Uemerova emphasised that ethics should guide technology to serve humanity, not restrict it. Laureates echoed the need for shared moral responsibility and global cooperation in shaping AI’s future.

The new Beruniy Prize reaffirms that ethics form the cornerstone of progress. By celebrating innovation grounded in empathy, inclusivity, and accountability, UNESCO aims to ensure AI remains a force for peace, justice, and sustainable development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How Google uses AI to support teachers and inspire students

Google is redefining education with AI designed to enhance learning, rather than replace teachers. The company has unveiled new tools grounded in learning science to support both educators and students, aiming to make learning more effective, efficient and engaging.

Through its Gemini platform, users can follow guided learning paths that encourage discovery rather than passive answers.

YouTube and Search now include conversational features that allow students to ask questions as they learn, while NotebookLM can transform personal materials into quizzes or immersive study aids.

Instructors can also utilise Google Classroom’s free AI tools for lesson planning and administrative support, thereby freeing up time for direct student engagement.

Google emphasises that its goal is to preserve the human essence of education while using AI to expand understanding. The company also addresses challenges linked to AI in learning, such as cheating, fairness, accuracy and critical thinking.

It is exploring assessment models that cannot be easily replicated by AI, including debates, projects, and oral examinations.

The firm pledges to develop its tools responsibly by collaborating with educators, parents and policymakers. By combining the art of teaching with the science of AI-driven learning, Google seeks to make education more personal, equitable and inspiring for all.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta rejects French ruling over gender bias in Facebook job ads

Meta has rejected a decision by France’s Défenseur des Droits that found its Facebook algorithm discriminates against users based on gender in job advertising. The case was brought by Global Witness and women’s rights groups Fondation des Femmes and Femmes Ingénieures, who argued that Meta’s ad system violates French anti-discrimination law.

The rights body ruled that Facebook’s system treats users differently according to gender when displaying job opportunities, amounting to indirect discrimination. It recommended Meta Ireland and Facebook France make adjustments within three months to prevent gender-based bias.

A Meta spokesperson said the company disagrees with the finding and is ‘assessing its options.’ The complainants welcomed the decision, saying it confirms that platforms are not exempt from laws prohibiting gender-based distinctions in recruitment advertising.

Lawyer Josephine Shefet, representing the groups, said the ruling marks a key precedent. ‘The decision sends a strong message to all digital platforms: they will be held accountable for such bias,’ she said.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO adopts first global ethical framework for neurotechnology

UNESCO has approved the world’s first global framework on the ethics of neurotechnology, setting new standards to ensure that advances in brain science respect human rights and dignity. The Recommendation, adopted by member states and entering into force on 12 November, establishes safeguards to ensure neurotechnological innovation benefits those in need without compromising mental privacy.

Launched in 2019 under Director-General Audrey Azoulay, the initiative builds on UNESCO’s earlier work on AI ethics. Azoulay described neurotechnology as a ‘new frontier of human progress’ that demands strict ethical boundaries to protect the inviolability of the human mind. The framework reflects UNESCO’s belief that technology should serve humanity responsibly and inclusively.

Neurotechnology, which enables direct interaction with the nervous system, is rapidly expanding, with investment in the sector rising by 700% between 2014 and 2021. While medical uses, such as deep brain stimulation and brain–computer interfaces, offer hope for people with Parkinson’s disease or disabilities, consumer devices that read neural data pose serious privacy concerns. Many users unknowingly share sensitive information about their emotions or mental states through everyday gadgets.

The Recommendation calls on governments to regulate these technologies, ensure they remain accessible, and protect vulnerable groups, especially children and workers. It urges bans on non-therapeutic use in young people and warns against monitoring employees’ mental activity or productivity without explicit consent.

UNESCO also stresses the need for transparency and better regulation of products that may alter behaviour or foster addiction.

Developed after consultations with over 8,000 contributors from academia, industry, and civil society, the framework was drafted by an international group of experts led by scientists Hervé Chneiweiss and Nita Farahany. UNESCO will now help countries translate the principles into national laws, as it has done with its 2021 AI ethics framework.

The Recommendation’s adoption, finalised at the General Conference in Samarkand, marks a new milestone in the global governance of emerging technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI outlines roadmap for AI safety, accountability and global cooperation

OpenAI has published new recommendations for managing rapid advances in AI, stressing the need for shared safety standards, public accountability, and resilience frameworks.

The company warned that while AI systems are increasingly capable of solving complex problems and accelerating discovery, they also pose significant risks that must be addressed collaboratively.

According to OpenAI, the next few years could bring systems capable of discoveries once thought centuries away.

The firm expects AI to transform health, materials science, drug development and education, while acknowledging that economic transitions may be disruptive and could require a rethinking of social contracts.

To ensure safe development, OpenAI proposed shared safety principles among frontier labs, new public oversight mechanisms proportional to AI capabilities, and the creation of a resilience ecosystem similar to cybersecurity.

It also called for regular reporting on AI’s societal impact to guide evidence-based policymaking.

OpenAI reiterated that the goal should be to empower individuals by making advanced AI broadly accessible, within limits defined by society, and to treat access to AI as a foundational public utility in the years ahead.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LIBE backs new Europol Regulation despite data protection and discrimination warnings

The European Parliament’s civil liberties committee (LIBE) voted to endorse a new Europol Regulation, part of the ‘Facilitators Package’, by 59–10 with four abstentions.

Rights groups and the European Data Protection Supervisor had urged MEPs to reject the proposal, arguing the law fuels discrimination and grants Europol and Frontex unprecedented surveillance capabilities with insufficient oversight.

If approved in plenary later this month, the reform would grant Europol broader powers to collect, process and share data, including biometrics such as facial recognition, and enable exchanges with non-EU states.

Campaigners note the proposal advanced without an impact assessment, contrary to the Commission’s Better Regulation guidance.

Civil society groups warn that the changes risk normalising surveillance in migration management. Access Now’s Caterina Rodelli said MEPs had ‘greenlighted the European Commission’s long-term plan to turn Europe into a digital police state’, while Equinox’s Sarah Chander called the vote proof the EU has ‘abandoned’ humane, evidence-based policy.

EDRi’s Chloé Berthélémy said the reform legitimises ‘unaccountable and opaque data practices’, creating a ‘data black hole’ that undermines rights and the rule of law. More than 120 organisations called on MEPs to reject the text, arguing it is ‘unlawful, unsafe, and unsubstantiated’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Snap brings Perplexity’s answer engine into Chat for nearly a billion users

Starting in early 2026, Perplexity’s AI will be integrated into Snapchat’s Chat, accessible to nearly 1 billion users. Snapchatters can ask questions and receive concise, cited answers in-app. Snap says the move reinforces its position as a trusted, mobile-first AI platform.

Under the deal, Perplexity will pay Snap $400 million in cash and equity over a one-year period, tied to the global rollout. Revenue contribution is expected to begin in 2026. Snap points to its 943 million MAUs and its reach of over 75% of 13–34-year-olds in 25+ countries.

Perplexity frames the move as meeting curiosity where it occurs, within everyday conversations. Evan Spiegel says Snap aims to make AI more personal, social, and fun, woven into friendships and conversations. Both firms pitch the partnership as enhancing discovery and learning on Snapchat.

Perplexity joins, rather than replaces, Snapchat’s existing My AI. Messages sent to Perplexity will inform personalisation on Snapchat, similar to My AI’s current behaviour. Snap claims the approach is privacy-safe and designed to provide credible, real-time answers from verifiable sources.

Snap casts this as a first step toward a broader AI partner platform inside Snapchat. The companies plan creative, trusted ways for leading AI providers to reach Snap’s global community. The integration aims to enable seamless, in-chat exploration while keeping users within Snapchat’s product experience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

How GEMS turns Copilot time savings into personalised teaching at scale

GEMS Education is rolling out Microsoft 365 Copilot to cut admin and personalise learning, with clear guardrails and transparency. Teachers spend less time on preparation and more time with pupils. The aim is augmentation, not replacement.

Copilot serves as a single workspace for plans, sources, and visuals. Differentiated materials arrive faster for struggling and advanced learners. More time goes to feedback and small groups.

Student projects are accelerating. A Grade 8 pupil built a smart-helmet prototype, using AI to guide circuitry, code, and documentation. The project moved quickly from idea to a working build.

The School of Research and Innovation opened in August 2025 as a living lab, hosting educator training, research partners, and student incubation. A Microsoft-backed stack underpins the campus.

Teachers are co-creating lightweight AI agents for curriculum and analytics. Expert oversight and safety patterns stay central. The focus is on measurable time savings and real-world learning.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Material-level AI emerges in MIT–DeRucci sleep science collaboration

MIT’s Sensor and Ambient Intelligence group, led by Joseph Paradiso, unveiled ‘FiberCircuits’, a smart-fibre platform co-developed with DeRucci. It embeds sensing, edge inference, and feedback directly in fibres to create ‘weavable intelligence’. The aim is natural, low-intrusion human–computer interaction.

Teams embedded AI micro-sensors and sub-millimetre ICs to capture respiration, movement, skin conductance, and temperature, running tinyML locally for privacy. Feedback via light, sound, or micro-stimulation closes the loop while keeping power and data exposure low.

Sleep science prototypes included a mattress with distributed sensors for posture recognition, an eye mask combining PPG and EMG, and an IMU-enabled pillow. Prototypes were used to validate signal parsing and human–machine coupling across various sleep scenarios.

Edge-first design places most inference on the fibre to protect user data and reduce interference, according to DeRucci’s CTO, Chen Wenze. Collaboration covered architecture, algorithms, and validation, with early results highlighting comfort, durability, and responsiveness suitable for bedding.

Partners plan to expand cohorts and scenarios into rehabilitation and non-invasive monitoring, and to release selected algorithms and test protocols. Paradiso framed material-level intelligence as a path to gentler interfaces that blend into everyday environments.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!