Ontario updates deidentification guidelines for safer data use

Ontario’s privacy watchdog has released an expanded set of deidentification guidelines to help organisations protect personal data while enabling innovation. The 100-page document from the Office of the Information and Privacy Commissioner (IPC) offers step-by-step advice, checklists and examples.

The update modernises the 2016 version to reflect global regulatory changes and new data protection practices. The IPC emphasised that the guidelines aim to help organisations of all sizes responsibly anonymise data while maintaining its usefulness for research, AI development and public benefit.

Developed through broad stakeholder consultation, the guidelines were refined with input from privacy experts and the Canadian Anonymization Network. The new version responds to industry requests for more detailed, operational guidance.

Although the guidelines are not legally binding, experts said following them can reduce liability risks and strengthen compliance with privacy laws. The IPC hopes they will serve as a practical reference for executives and data officers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

French lawmakers advance plan to double digital services tax on Big Tech

France’s National Assembly has voted to raise its digital services tax on major tech firms such as Google, Apple, Meta and Amazon from 3% to 6%, despite government warnings that the move could trigger US trade retaliation.

Economy Minister Roland Lescure said the increase would be ‘disproportionate’, cautioning that it could invite equally strong countermeasures from Washington. Lawmakers had initially proposed a 15% levy in response to former US President Donald Trump’s tariff threats, but scaled back amid opposition from industry and the government.

The amendment still requires final approval in next week’s budget vote and then in the French Senate. The proposal also raises the global revenue threshold for companies subject to the digital services tax from €750 million to €2 billion, aiming to shield smaller domestic firms.

John Murphy of the US Chamber of Commerce criticised the plan, arguing it solely targets American companies. Lawmaker Charles Sitzenstuhl, from President Emmanuel Macron’s party, stressed that ‘the objective of this tax was not to harm the United States in any way’, addressing US officials following the vote.

OpenAI Foundation to fund global health and AI safety projects

OpenAI has finalised its recapitalisation, simplifying its structure while preserving its core mission. The new OpenAI Foundation controls OpenAI Group PBC and holds about $130 billion in equity, making it one of history’s best-funded philanthropies.

The Foundation will receive further ownership as OpenAI’s valuation grows, ensuring its financial resources expand alongside the company’s success. Its mission remains to ensure that artificial general intelligence benefits all of humanity.

The more the business prospers, the greater the Foundation’s capacity to fund global initiatives.

An initial $25 billion commitment will focus on two core areas: advancing healthcare breakthroughs and strengthening AI resilience. Funds will go toward open-source health datasets, medical research, and technical defences to make AI systems safer and more reliable.

The initiative builds on OpenAI’s existing People-First AI Fund and reflects recommendations from its Nonprofit Commission.

The recapitalisation follows nearly a year of discussions with the Attorneys General of California and Delaware, resulting in stronger governance and accountability. With this structure, OpenAI aims to advance science, promote global cooperation, and share AI benefits broadly.

NVIDIA expands open-source AI models to boost global innovation

US tech giant NVIDIA has released open-source AI models and data tools spanning language, biology and robotics to accelerate innovation and expand access to cutting-edge research.

The new model families (Nemotron, Cosmos, Isaac GR00T and Clara) are designed to help developers build intelligent agents and applications with enhanced reasoning and multimodal capabilities.

The company is contributing these open models and datasets to Hugging Face, further solidifying its position as a leading supporter of open research.

Nemotron models improve reasoning for digital AI agents, while Cosmos and Isaac GR00T enable physical AI and robotic systems to perform complex simulations and behaviours. Clara advances biomedical AI, allowing scientists to analyse RNA, generate 3D protein structures and enhance medical imaging.

Major industry partners, including Amazon Robotics, ServiceNow, Palantir and PayPal, are already integrating NVIDIA’s technologies to develop next-generation AI agents.

The initiative reflects NVIDIA’s aim to create an open ecosystem that supports both enterprise and scientific innovation through accessible, transparent and responsible AI.

Labels press platforms to curb AI slop and protect artists

Luke Temple woke to messages about a new Here We Go Magic track he never made. An AI-generated song appeared on the band’s Spotify, Tidal, and YouTube pages, triggering fresh worries about impersonation as cheap tools flood platforms.

Platforms say defences are improving. Spotify confirmed the removal of the fake track and highlighted new safeguards against impersonation, plus a tool to flag mismatched releases pre-launch. Tidal said it removed the song and is upgrading AI detection. YouTube did not comment.

Industry teams describe a cat-and-mouse race. Bad actors exploit third-party distributors with light verification, slipping AI pastiches into official pages. Tools like Suno and Udio enable rapid cloning, encouraging volume spam that targets dormant and lesser-known acts.

Per-track revenue losses are tiny; the reputational damage is not. Artists warn that identity theft and fan confusion erode trust, especially when fakes sit beside legitimate catalogues or mimic deceased performers. Labels caution that the volume of fakes is outpacing takedowns across major services.

Proposed fixes include stricter distributor onboarding, verified artist controls, watermark detection, and clear AI labels for listeners. Rights holders want faster escalation and penalties for repeat offenders. Musicians monitor profiles and report issues, yet argue platforms must shoulder the heavier lift.

NVIDIA and Nokia join forces to build the AI platform for 6G

Nokia and NVIDIA have announced a $1 billion partnership to develop an AI-powered platform that will drive the transition from 5G to 6G networks.

The collaboration will create next-generation AI-RAN systems, combining computing, sensing and connectivity to transform how US mobile networks process data and deliver services.

The partnership marks a strategic step in both companies’ ambition to regain global leadership in telecommunications.

By integrating NVIDIA’s new Aerial RAN Computer and Nokia’s AI-RAN software, operators can upgrade existing networks through software updates instead of complete infrastructure replacements.

T-Mobile US will begin field tests in 2026, supported by Dell’s PowerEdge servers.

NVIDIA’s investment and collaboration with Nokia aim to strengthen the foundation for AI-native networks that can handle the rising demand from agentic, generative and physical AI applications.

These networks are expected to support future 6G use cases, including drones, autonomous vehicles and advanced augmented reality systems.

Both companies see AI-RAN as the next evolution of wireless connectivity, uniting data processing and communication at the edge for greater performance, energy efficiency and innovation.

Most Greeks have never used AI at work

A new Focus Bari survey shows that AI is still unfamiliar territory for most Greeks.

Although more than eight in ten have heard of AI, 68 percent say they have never used it professionally. The study highlights that Greece is integrating AI into the workplace more slowly than many other countries.

The survey covered 21 nations and found that 83 percent of Greeks know about AI, compared with 17 percent who do not. Only 35 percent feel well-informed, while about one in three admits to knowing little about the technology.

Similar trends appear worldwide, with Switzerland, Mexico, and Romania leading in AI awareness, while countries like Nigeria, Japan, and Australia show limited familiarity.

Globally, almost half of respondents use AI in their everyday lives, yet only one in three applies it in their work. In Greece, that gap remains wide, suggesting that AI is still seen as a distant concept rather than a professional tool.

UNESCO surveys women on AI fairness and safety

UNESCO’s Office for the Caribbean has launched a regional survey examining gender and AI, titled Perception of AI Fairness and Online Safety among Women and Girls in the Caribbean. The initiative addresses the lack of data on how women and girls experience technology, AI, and online violence in the region.

Results will guide policy recommendations to promote human rights and safer digital environments.

The 2025 survey is part of a broader UNESCO effort to understand AI’s impact on gender equality. It covers gender-based online violence, generative AI’s implications for privacy, and potential biases in large AI models.

The findings will feed into a regional policy brief comparing Caribbean results with global data.

UNESCO encourages participation from women and girls across the Caribbean, highlighting that community input is vital for shaping effective AI policies. A one-day workshop on 10 December 2025 will equip young women with skills to navigate AI safely.

The initiative aims to position the Caribbean as a leader in ensuring AI respects dignity, equality, and human rights.

Adobe launches AI Assistant to simplify creative design

Adobe has launched a new AI Assistant in Express, enabling users to create and edit content from concept to completion in minutes. The tool understands design context and lets users create on-brand visuals by describing their ideas.

Users can seamlessly adjust fonts, images, backgrounds, and other elements while keeping the rest of the design intact.

The AI Assistant integrates generative AI models with Adobe’s professional tools, turning templates into conversational canvases. Users can make targeted edits, replace objects, or transform designs without starting over.

The assistant also interprets subjective requests, suggesting creative options and offering contextual prompts to refine results efficiently, enhancing both speed and quality of content creation.

Adobe Express will extend the AI Assistant with enterprise-grade features, including template locking, batch creation, and brand consistency tools. Early adopters report that non-designers can now produce professional visuals quickly, while experienced designers save time on routine tasks.

Organisations can expect improved collaboration, efficiency, and consistency across content supply chains.

The AI Assistant beta is currently available to Adobe Express Premium customers on desktop, with full availability planned for all users via the Firefly generative credit system. Adobe stresses that AI enhances creativity, respects creators’ rights, and supports responsible generative AI use.

Rare but real: mental health risks at ChatGPT scale

OpenAI says a small share of ChatGPT users show possible signs of mental health emergencies each week, including mania, psychosis, or suicidal thoughts. The company estimates that 0.07 percent of weekly users show such signs and says safety prompts are triggered in those conversations. Critics argue that even small percentages translate into large absolute numbers at ChatGPT’s scale.

A further 0.15 percent of weekly users discuss explicit indicators of potential suicidal planning or intent. Updates aim to respond more safely and empathetically, and to flag indirect self-harm signals. Sensitive chats can be routed to safer models in a new window.
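The critics’ scaling point can be illustrated with a back-of-envelope calculation. The weekly user count below is a hypothetical assumption for illustration only; only the two percentage figures come from OpenAI’s reported estimates.

```python
# Back-of-envelope illustration: small percentages become large absolute
# numbers at a large user base. ASSUMED_WEEKLY_USERS is hypothetical,
# not an OpenAI figure; the percentages are those reported in the article.

ASSUMED_WEEKLY_USERS = 800_000_000  # hypothetical weekly active users


def affected(users: int, percent: float) -> int:
    """Absolute number of users corresponding to a percentage share."""
    return round(users * percent / 100)


emergency_signs = affected(ASSUMED_WEEKLY_USERS, 0.07)  # possible emergency signs
suicidal_intent = affected(ASSUMED_WEEKLY_USERS, 0.15)  # explicit suicidal indicators

print(f"{emergency_signs:,} users/week")   # 560,000
print(f"{suicidal_intent:,} users/week")   # 1,200,000
```

Even under a much smaller assumed user base, fractions of a percent still correspond to hundreds of thousands of people each week, which is why critics focus on absolute counts rather than percentages.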

More than 170 clinicians across 60 countries advise OpenAI on risk cues and responses. Guidance focuses on encouraging users to seek real-world support. Researchers warn vulnerable people may struggle to act on on-screen warnings.

External specialists see both value and limits. AI may widen access when services are stretched, yet automated advice can mislead. Risks include reinforcing delusions and misplaced trust in authoritative-sounding output.

Legal and public scrutiny is rising after high-profile cases linked to chatbot interactions. Families and campaigners want more transparent accountability and stronger guardrails. Regulators continue to debate transparency, escalation pathways, and duty of care.
