Italy closes Google probe after consent changes

Italy has closed its investigation into Google after the company agreed to adjust how it requests user consent for personal data use. Regulators had accused Google of presenting unclear and potentially misleading choices when connecting users to its services.

The authority said Google will now offer clearer explanations about how consent affects data processing. Updates will outline where personal information may be combined or used across the company’s wider service ecosystem.

Officials launched the probe in July 2024, arguing that Google’s approach could amount to an aggressive commercial practice. The revised consent flows were accepted as sufficient remedies, and the case was closed without financial penalties.

The Italian competition regulator indicated that transparency improvements were central to compliance. Similar scrutiny continues across Europe as regulators assess how large technology firms obtain permission for extensive data handling.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

New NVIDIA model drives breakthroughs in conservation biology

Researchers have introduced a biology foundation model that can recognise over a million species and understand relationships across the animal and plant kingdoms.

BioCLIP 2 was trained on one of the most extensive biological datasets ever compiled, allowing it to identify traits, cluster organisms and reveal patterns that support conservation efforts.

The model relies on NVIDIA accelerated computing rather than traditional methods, demonstrating what large-scale biological learning can achieve.

Training drew on more than two hundred million images covering hundreds of thousands of taxonomic classes. The AI model learned how species fit within wider biological hierarchies, and how traits differ across age, sex and related groups, without explicit guidance.

It even separated diseased leaves from healthy samples, offering a route to improved monitoring of ecosystems and agricultural resilience.
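The clustering described above operates on embedding vectors: images of related organisms map to nearby points in a shared space. As a rough illustration only (the vectors below are invented toy values, not actual BioCLIP 2 outputs), cosine similarity is the standard measure of closeness between such embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for image embeddings (hypothetical values, not real model output)
oak_leaf = [0.9, 0.1, 0.2]
maple_leaf = [0.85, 0.15, 0.25]
frog = [0.1, 0.9, 0.3]

print(cosine_similarity(oak_leaf, maple_leaf))  # high: closely related organisms
print(cosine_similarity(oak_leaf, frog))        # lower: taxonomically distant
```

Grouping organisms by this similarity is what lets an embedding model reveal taxonomic structure it was never explicitly taught.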

Scientists now plan to expand the project by utilising wildlife digital twins that simulate ecological systems in controlled environments.

Researchers will be able to study species interactions and test scenarios instead of disturbing natural habitats. The approach opens possibilities for richer ecological research and could offer the public immersive ways to view biodiversity from the perspective of different animals.

BioCLIP 2 is available as open-source software and has already attracted strong global interest. Its capabilities indicate a shift toward more advanced biological modelling powered by accelerated computing, providing conservationists and educators with new tools to address long-standing knowledge gaps.

GPT‑5 expands research speed and idea generation for scientists

AI technology is increasingly helping scientists accelerate research across fields including biology, mathematics, physics, and computer science. Early GPT‑5 studies show it can synthesise information, propose experiments, and aid in solving long-standing mathematical problems.

Experts note the technology expands the range of ideas researchers can explore and shortens the time to validate results.

Case studies demonstrate tangible benefits: in biology, GPT‑5 helped identify mechanisms in human immune cells within minutes, suggesting experiments that confirmed the results.

In mathematics, GPT‑5 suggested new approaches, and in optimisation, it identified improved solutions later verified by researchers.

These advances reinforce human-led research rather than replacing it.

OpenAI for Science emphasises collaboration between AI and experts. GPT‑5 excels at conceptual literature review, exploring connections across disciplines, and proposing hypotheses for experimental testing.

Its greatest impact comes when researchers guide the process, breaking down problems, critiquing suggestions, and validating outcomes.

Researchers caution that AI does not replace human expertise. Current models aid speed, idea generation, and breadth, but expert oversight is essential to ensure reliable and meaningful scientific contributions.

Google launches Nano Banana Pro image model

Google has launched Nano Banana Pro, a new image generation and editing model built on Gemini 3 Pro. The upgrade expands Gemini’s visual capabilities inside the Gemini app, Google Ads, Google AI Studio, Vertex AI and Workspace tools.

Nano Banana Pro focuses on cleaner text rendering, richer world knowledge and tighter control over style and layout. Creators can produce infographics, diagrams and character-consistent scenes, and refine lighting, camera angle or composition with detailed prompts.

The AI model supports higher resolution visuals, localised text in multiple languages and more accurate handling of complex scripts. Google highlights uses in marketing materials, business presentations and professional design workflows, as partners such as Adobe integrate the model into Firefly and Photoshop.

Users can try Nano Banana Pro through Gemini with usage limits, while paying customers and enterprises gain extended access. Google embeds watermarking and C2PA-style metadata to help identify AI-generated images, foregrounding safety and transparency around synthetic content.

Creative industries seek rights protection amid AI surge

British novelists are raising concerns that AI could replace their work, with nearly half saying the technology could ‘entirely replace’ them. The MCTD survey of 332 authors found deep unease about the impact of generative tools trained on vast fiction datasets.

About 97% of novelists expressed intense negativity towards the idea of AI writing complete novels, while around 40% said their income from related work had already suffered. Many authors have reported that their work has been used to train large language models without their permission or payment.

While 80% agreed AI offers societal benefits, authors called for better protections, including copyright reform and consent-based use of their work. MCTD Executive Director Prof. Gina Neff stressed that creative industries are not expendable in the AI race.

A UK government spokesperson said collaboration between the AI sector and creative industries is vital, with a focus on innovation and protection for creators. But writers say urgent action is needed to ensure their rights are upheld.

EU unveils vision for a modern justice system

The European Commission has introduced a new Digital Justice Package designed to guide the EU justice systems into a fully digital era.

The plan sets out a long-term strategy to support citizens, businesses and legal professionals with modern tools instead of outdated administrative processes. Central objectives include improved access to information, stronger cross-border cooperation and a faster shift toward AI-supported services.

The DigitalJustice@2030 Strategy contains fourteen steps that encourage member states to adopt advanced digital tools and share successful practices.

A key part of the roadmap focuses on expanding the European Legal Data Space, enabling legislation and case law to be accessed more efficiently.

The Commission intends to deepen cooperation by developing a shared toolbox for AI and IT systems and by seeking a unified European solution to cross-border videoconferencing challenges.

Additionally, the Commission has presented a Judicial Training Strategy designed to equip judges, prosecutors and legal staff with the digital and AI skills required to apply EU digital law effectively.

Training will include digital case management, secure communication methods and awareness of AI’s influence on legal practice. The goal is to align national and EU programmes to increase long-term impact, rather than fragmenting efforts.

European officials argue that digital justice strengthens competitiveness by reducing delays, encouraging transparency and improving access for citizens and businesses.

The package supports the EU’s Digital Decade ambition to make all key public services available online by 2030. It stands as a further step toward resilient and modern judicial systems across the Union.

OpenAI unveils new global group chat experience

OpenAI has now rolled out group chats worldwide to all ChatGPT users on the Free, Go, Plus and Pro plans, rather than limiting access to a handful of trial regions.

The upgrade follows a pilot in Japan and New Zealand and marks a turning point in how the company wants people to use AI in everyday communication.

Group chats enable up to twenty participants to collaborate in a shared space, where they can plan trips, co-write documents, or settle disagreements through collective decision-making.

ChatGPT remains available as a partner that contributes when tagged, reacts with emojis and references profile photos instead of taking over the conversation. Each participant keeps private settings and memory, which prevents personal information from being shared across the group.

Users start a session by tapping the people icon and inviting others directly or through a link. Adding someone later creates a new chat, rather than altering the original, which preserves previous discussions intact.

OpenAI presents the feature as a way to turn the assistant into a social environment rather than a solitary tool.

The announcement arrives shortly after the release of GPT-5.1 and follows the introduction of Sora, a social app that encourages users to create videos with friends.

OpenAI views group chats as the first step toward a more active role for AI in real human exchanges where people plan, create and make decisions together.

Meta to block under-16 Australians from Facebook and Instagram early

Meta is beginning to block users in Australia who it believes are under 16 from using Instagram, Facebook, and Threads, starting 4 December, a week ahead of the government-mandated social media ban.

Last week, Meta sent in-app messages, emails and texts warning affected users to download their data because their accounts will soon be removed. As of 4 December, the company will deactivate existing accounts and block new sign-ups for users under 16.

To appeal the deactivation, targeted users can undergo age verification by providing a ‘video selfie’ to prove they are 16 or older, or by presenting a government-issued ID. Meta says it will ‘review and improve’ its systems, deploying AI-based age-assurance methods to reduce errors.

Observers highlight the risks of false positives in Meta’s age checks. Facial age estimation, conducted through partner company Yoti, has known margins of error.

The enforcement comes amid Australia’s world-first law that bars under-16s from using several major social media platforms, including Instagram, Snapchat, TikTok, YouTube, X and more.

Pennsylvania Senate passes bill to tackle AI-generated CSAM

The Pennsylvania Senate has passed Senate Bill 1050, requiring all individuals classified as mandated reporters to notify authorities of any instance of child sexual abuse material (CSAM) they become aware of, including material produced by a minor or generated using artificial intelligence.

The bill, sponsored by Senators Tracy Pennycuick, Scott Martin and Lisa Baker, addresses the recent rise in AI-generated CSAM and builds upon earlier legislation (Act 125 of 2024 and Act 35 of 2025) that targeted deepfakes and sexual deepfake content.

Supporters argue the bill strengthens child protection by closing a legal gap: while existing laws focused on CSAM involving real minors, the new measure explicitly covers AI-generated material. Senator Martin said the threat from AI-generated images is ‘very real’.

From a tech policy perspective, this law highlights how rapidly evolving AI capabilities, especially around image synthesis and manipulation, are pushing lawmakers to update obligations for reporting, investigation and accountability.

It raises questions around how institutions, schools and health-care providers will adapt to these new responsibilities and what enforcement mechanisms will look like.

AI in healthcare gains regulatory compass from UK experts

Professor Alastair Denniston has outlined the core principles for regulating AI in healthcare, describing AI as the ‘X-ray moment’ of our time.

Like previous innovations such as MRI scanners and antibiotics, AI has the potential to improve diagnosis, treatment and personalised care dramatically. Still, it also requires careful oversight to ensure patient safety.

The MHRA’s National Commission on the Regulation of AI in Healthcare is developing a framework based on three key principles. The framework must be safe, ensuring proportionate regulation that protects patients without stifling innovation.

It must be fast, reducing delays in bringing beneficial technologies to patients and supporting small innovators who cannot endure long regulatory timelines. Ultimately, it must be trusted, with transparent processes that foster confidence in AI technologies today and in the future.

Professor Denniston emphasises that AI is not a single technology but a rapidly evolving ecosystem. The regulatory system must keep pace with advances while allowing the NHS to harness AI safely and efficiently.

Just as with earlier medical breakthroughs, failure to innovate can carry risks equal to the dangers of new technologies themselves.

The National Commission will soon invite the public to contribute their views through a call for evidence.

Patients, healthcare professionals, and members of the public are encouraged to share what matters to them, helping to shape a framework that balances safety, speed, and trust while unlocking the full potential of AI in the NHS.
