Google enters a new frontier with Gemini 3

Google has entered a new phase of its AI strategy with the release of Gemini 3, the company’s most advanced model to date.

The new system prioritises deeper reasoning and more subtle multimodal understanding, enabling users to approach difficult ideas with greater clarity instead of relying on repetitive prompting. It marks a major step for Google’s long-term project to integrate stronger intelligence into products used by billions.

Gemini 3 Pro is already available in preview across the Gemini app, AI Mode in Search, AI Studio, Vertex AI and Google’s new development platform known as Antigravity.

The model performs at the top of major benchmarks in reasoning, mathematics, tool use and multimodal comprehension, offering substantial improvements over Gemini 2.5 Pro.

Deep Think mode extends the model’s capabilities even further, reaching new records on demanding academic and AGI-oriented tests, although Google is delaying wider release until additional safety checks conclude.

Users can rely on Gemini 3 to learn complex topics, analyse handwritten material, decode long academic texts or translate lengthy videos into interactive guides instead of navigating separate tools.

Developers benefit from richer interactive interfaces, more autonomous coding agents and the ability to plan tasks over longer horizons.

Google Antigravity enhances this shift by giving agents direct control of the development environment, allowing them to plan, write and validate code independently while remaining under human supervision.

Google emphasises that Gemini 3 is its most extensively evaluated model, supported by independent audits and strengthened protections against manipulation. The system forms the foundation for Google’s next era of agentic, personalised AI and will soon expand with additional models in the Gemini 3 series.

The company expects the new generation to reshape how people learn, build and organise daily tasks instead of depending on fragmented digital services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The future of EU data protection under the Omnibus Package

Introduction and background information

The Commission claims that the Omnibus Package aims to simplify certain European Union legislation to strengthen the Union’s long-term competitiveness. Six omnibus packages have been announced in total.

The latest (no. 4) targets small mid-caps and digitalisation. Package no. 4 covers data legislation, cookies and tracking technologies (i.e. the General Data Protection Regulation (GDPR) and ePrivacy Directive (ePD)), as well as cybersecurity incident reporting and adjustments to the Artificial Intelligence Act (AIA).

That ‘simplification’ is part of a broader agenda to appease business, industry and governments who argue that the EU has too much red tape. In her September 2025 speech to German economic and business associations, Ursula von der Leyen sided with industry and stated that simplification is ‘the only way to remain competitive’.

As for why these particular laws were selected, the rationale is unclear. One stated motivation for including the GDPR is its mention in Mario Draghi’s 2024 report on ‘The Future of European Competitiveness’.

Draghi, the former President of the European Central Bank, focused on innovation in advanced technologies, decarbonisation and competitiveness, as well as security. Yet, the report does not outline any concrete way in which the GDPR allegedly reduces competitiveness or requires revision.

The GDPR appears only twice in the report. First, as a brief reference to regulatory fragmentation affecting the reuse of sensitive health data across Member States (MS).

Second, in the concluding remarks, it is claimed that ‘the GDPR in particular has been implemented with a large degree of fragmentation which undermines the EU’s digital goals’. There is, however, no explanation of this ‘large fragmentation’, no supporting evidence, and no dedicated section on the GDPR; its first mention is buried in the R&I (research and innovation) context.

It is therefore unclear what legal or analytical basis the Commission relies on to justify including the GDPR in this simplification exercise.

The current debate

There are two main sides to this Omnibus debate: the privacy-forward side and the competitiveness/SME side. The two need not be mutually exclusive, but civil society warns that ‘simplification’ risks eroding privacy protection. Privacy advocates across civil society expressed strong concern and opposition to simplification in their responses to the European Commission’s recent call for evidence.

Industry positions vary in tone and ambition. For example, CrowdStrike calls for greater legal certainty under the Cybersecurity Act, such as making recital 55 binding rather than merely guiding and introducing a one-stop-shop mechanism for incident reporting.

Meta, by contrast, urges the Commission to go beyond ‘easing administrative burdens’, calling for a pause in AI Act enforcement and a sweeping reform of the EU data protection law. On the civil society side, Access Now argues that fundamental rights protections are at stake.

It warns that any reduction in consent prompts could allow tracking technologies to operate without users ever being given a real opportunity to refuse. A more balanced, yet cautious line can be found in the EDPB and EDPS joint opinion regarding easing records of processing activities for SMEs.

Similar to the industry, they support reducing administrative burdens, but with the caveat that amendments should not compromise the protection of fundamental rights, echoing key concerns of civil society.

Regarding Member State support, Estonia, France, Austria and Slovenia are firmly against any reopening of the GDPR. By contrast, the Czech Republic, Finland and Poland propose targeted amendments, while Germany proposes a more systematic reopening of the GDPR.

Individual Members of the European Parliament have also come out in favour of reopening, notably Aura Salla, a Finnish centre-right MEP who previously headed Meta’s Brussels lobbying office.

Given these varied opinions, it is hard to say what the final version of the Omnibus will look like. Yet a leaked draft of the GDPR’s potential modifications offers a strong indication: on examination, the views of less privacy-friendly entities have clearly served as a guiding path.

Leaked draft document main changes

The leaked draft introduces several core changes.

Those changes include a new definition of personal and sensitive data, the use of legitimate interest (LI) for AI processing, an intertwining of the ePrivacy Directive (ePD) and GDPR, data breach reforms, a centralised data protection impact assessment (DPIA) whitelist/blacklist, and access rights being conditional on motive for use.

A new definition of personal data

The draft redefines personal data so that ‘information is not personal data for everyone merely because another entity can identify that natural person’. That directly contradicts established EU case law, which holds that if an entity can, with reasonable means, identify a natural person, then the information is personal data, regardless of who else can identify that person.

A new definition of sensitive data

Under current rules, inferred information can be sensitive personal data. If a political opinion is inferred from browsing history, that inference is protected.

The draft would narrow this by limiting sensitive data to information that ‘directly reveals’ special categories (political views, health, religion, sexual orientation, race/ethnicity, trade union membership). That would remove protection from data derived through profiling and inference.

Detected patterns, such as visits to a health clinic or political website, would no longer be treated as sensitive, and only explicit statements similar to ‘I support the EPP’ or ‘I am Muslim’ would remain covered.

Intertwining article 5(3) ePD and the GDPR

Article 5(3) ePD is effectively copied into the GDPR as a new Article 88a. Article 88a would allow the processing of personal data ‘on or from’ terminal equipment where necessary for transmission, service provision, creating aggregated information (e.g. statistics), or for security purposes, alongside the existing legal bases in Articles 6(1) and 9(2) of the GDPR.

That generates confusion about how these legal bases interact, especially when combined with AI processing under LI. Would this mean that processing personal data ‘on or from’ terminal equipment is permitted whenever it is carried out by AI?

The scope is widened. The original ePD covered ‘storing of information, or gaining access to information already stored, in the terminal equipment’. The draft instead regulates any processing of personal data ‘on or from’ terminal equipment. That significantly expands the ePD’s reach and would force controllers to reassess and potentially adapt a broad range of existing operations.

LI for AI personal data processing

A new Article 88c GDPR, ‘Processing in the context of the development and operation of AI’, would allow controllers to rely on LI to process personal data for AI processing. That move would largely sideline data subject control. Businesses could train AI systems on individuals’ images, voices or creations without obtaining consent.

A centralised data breach portal, deadline extension and change in threshold reporting

The draft introduces three main changes to data breach reporting.

  • Extending the notification deadline from 72 to 96 hours, giving privacy teams more time to investigate and report.
  • A single EU-level reporting portal, simplifying reporting for organisations active in multiple MS.
  • Raising the notification threshold from ‘risk’ to ‘high risk’ to the rights and freedoms of data subjects.

The first two changes are industry-friendly measures designed to streamline operations. The third is more contentious. While industry welcomes fewer reporting obligations, civil society warns that a ‘high-risk’ threshold could leave many incidents unreported. Taken together, these reforms simplify obligations, albeit at the potential cost of reducing transparency.

Centralised processing activity (PA) list requiring a DPIA

This is another welcome change as it would clarify which PAs would automatically require a DPIA and which would not. The list would be updated every 3 years.

Note, however, that some controllers may not find their PA on the list and may assume, or argue, that a DPIA is not required. The language should therefore make clear that it is not a closed list.

Access requests denials

Currently, a data subject may request a copy of their data regardless of the motive. Under the draft, if a data subject exploits the right of access by using that material against the controller, the controller may charge a fee or refuse the request.

That is problematic for the protection of rights as it impacts informational self-determination and weakens an important enforcement tool for individuals.

For more information, an in-depth analysis by noyb can be accessed here.

The Commission’s updated version

On 19 November, the Commission published its digital omnibus proposal. Most of the amendments in the leaked draft remain. One measure that was dropped is the new definition of sensitive data, meaning that inferences can still amount to sensitive data.

However, the final document keeps three key changes that erode fundamental rights protections:

  • A narrower, subjective definition of personal data;
  • An intertwining of the ePD and the GDPR which also allows for processing based on aggregated and security purposes;
  • LI being relied upon as a legal basis for AI processing of personal data.

Still, positive changes remain:

  • A single entry point for EU data breach reporting. This is a welcome measure that streamlines reporting and eases some compliance obligations for EU businesses.
  • Another welcome measure is the whitelist/blacklist of processing activities that would or would not require a DPIA. The same caveat applies regarding how the final wording of this text is drafted.

Overall, these two measures are examples of simplification measures with concrete benefits.

The European Parliament now has the task of dissecting this proposal and debating what to keep and what to reject. Some experts have suggested that this may take at least a year given how many changes there are, but this is not certain.

We can also expect a revised version of the Commission’s proposal to be published, given the errors in language, numbering and article referencing that have been observed; this would not entail any changes to the content.

Final remarks

Simplification in itself is a good idea, and businesses need to have enough freedom to operate without being suffocated with red tape. However, changing a cornerstone of data protection law to such an extent that it threatens fundamental rights protections is just cause for concern.

Alarms have already been raised after the previous Omnibus package on green due diligence obligations was scrapped. We may now be witnessing a similar rollback, this time targeting digital rights.

As a result, all eyes remain on the 19 November proposal, which could reshape not only EU privacy standards but also global data protection norms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI energy demand strains electrical grids

Microsoft CEO Satya Nadella recently delivered a key insight, stating that the biggest hurdle to deploying new AI solutions is now electrical power, not chip supply. The massive energy requirements for running large language models (LLMs) have created a critical bottleneck for major cloud providers.

Nadella specified that Microsoft currently has a ‘bunch of chips sitting in inventory’ that cannot be plugged in and utilised. The problem is a lack of ‘warm shells’, meaning data centre buildings that are fully equipped with the necessary power and cooling capacity.

The escalating power requirements of AI infrastructure are placing extreme pressure on utility grids and capacity. Projections from the Lawrence Berkeley National Laboratory indicate that US data centres could consume up to 12 percent of the nation’s total electricity by 2028.

The disclosure should serve as a warning to investors, urging them to evaluate the infrastructure challenges alongside AI’s technological promise. This energy limitation could create a temporary drag on the sector, potentially slowing the massive projected returns on the $5 trillion investment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Old laws now target modern tracking technology

Class-action privacy litigation continues to grow in frequency, repurposing older laws to address modern data tracking technologies. Recent high-profile lawsuits have applied the California Invasion of Privacy Act and the Video Privacy Protection Act.

A unanimous jury verdict, now under appeal, recently found that Meta Platforms violated CIPA Section 632 by eavesdropping on users’ confidential communications without consent. The court ruled that Meta intentionally used its SDK within a sexual health app, Flo, to intercept sensitive real-time user inputs.

That judgement suggests an electronic device under the statute need not be physical, with a user’s phone qualifying as the requisite device. The legal success in these cases highlights a significant, rising risk for all companies utilising tracking pixels and software development kits (SDKs).

Separately, the VPPA has found new power against tracking pixels in the case of Jancik v. WebMD concerning video-viewing data. The court held that a consumer need not pay for a video service but can subscribe by simply exchanging their email address for a newsletter.

Companies must ensure their privacy policies clearly disclose all such tracking conduct to obtain explicit, valid consent. The courts are taking real-time data interception seriously, noting intentionality may be implied when a firm fails to stem the flow of sensitive personally identifiable information.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ALX and Anthropic partner with Rwanda on AI education

A landmark partnership between ALX, Anthropic, and the Government of Rwanda has launched a major AI learning initiative across Africa.

The program introduces ‘Chidi’, an AI-powered learning companion built on Anthropic’s Claude model. Instead of providing direct answers, the system is designed to guide learners through critical thinking and problem-solving, positioning African talent at the centre of global tech innovation.

The initiative, described as one of the largest AI-enhanced education deployments on the continent, will see Chidi integrated into Rwanda’s public education system. A pilot phase will involve up to 2,000 educators and select civil servants.

According to the partners, the collaboration aims to ensure Africa’s youth become creators of AI technology instead of remaining merely consumers of it.

The three-way collaboration unites ALX’s training infrastructure, Anthropic’s AI technology, and Rwanda’s progressive digital policy. The working group, the partners noted, will document insights to inform Rwanda’s national AI policy.

The initiative sets a new standard for inclusive, AI-powered learning, with Rwanda serving as a launch hub for future deployments across the continent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches WeatherNext 2 for faster forecasts

WeatherNext 2, Google’s latest AI forecasting model, offers significantly faster and more precise weather predictions. Developed by DeepMind and Google Research, the model produces forecasts eight times faster with hourly resolution, aiding decisions from supply chains to daily commutes.

The model generates hundreds of weather scenarios from a single starting point, enabling agencies and businesses to plan for all potential outcomes, including extreme events.

Its predictions outperform the previous WeatherNext model on 99.9% of variables, providing more accurate forecasts for temperature, wind, humidity, and other factors.

A Functional Generative Network (FGN) powers WeatherNext 2, allowing it to predict both individual weather elements and complex interconnected systems. The system enables applications such as forecasting regional heatwaves or wind farm output, while keeping predictions physically realistic.

Forecast data is available through Google Earth Engine, BigQuery, and an early access programme on Vertex AI, while WeatherNext 2 now powers Search, Gemini, Pixel Weather, and Google Maps’ Weather API.

Google plans to expand access further, supporting researchers, developers, and businesses to make informed decisions and accelerate scientific discovery.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Singapore’s HTX boosts Home Team AI capabilities with Mistral partnership

HTX has signed a new memorandum of understanding with France’s Mistral AI to accelerate joint research on large language and multimodal models for public safety. The partnership will expand into embodied AI, video analytics, cybersecurity, and automated fire safety systems.

The deal builds on earlier work co-developing Phoenix, HTX’s internal LLM series, and a Home Team safety benchmark for evaluating model behaviour. The organisations will now collaborate on specialised models for robots, surveillance platforms, and cyber defence tools.

Planned capabilities include natural-language control of robotic systems, autonomous navigation in unfamiliar environments, and object retrieval. Video AI tools will support predictive tracking and proactive crime alerts across multiple feeds.

Cybersecurity applications include automated architecture reviews and on-demand vulnerability testing. Fire safety tools will use multimodal comprehension to analyse architectural plans and flag compliance issues without manual checks.

The partnership forms part of the HTxAI movement, which aims to strengthen Home Team AI capacity through research collaborations with industry and academia. Mistral’s flagship models, Mistral Medium 3.1 and Magistral, are currently among the top performers in multilingual and multimodal benchmarks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare buys AI platform Replicate

Cloudflare has agreed to purchase Replicate, a platform simplifying the deployment and running of AI models. The technology aims to cut down on GPU hardware and infrastructure needs typically required for complex AI.

The acquisition will integrate Replicate’s extensive library of over 50,000 AI models into the Cloudflare platform. Developers can then access and deploy any AI model globally using just a single line of code for rapid implementation.

Matthew Prince, Cloudflare’s chief executive, stated the acquisition will make his company the ‘most seamless, all-in-one shop for AI development’. The move abstracts away infrastructure complexities so developers can focus only on delivering amazing products.

Replicate had previously raised $40m in venture funding from prominent investors in the US. Integrating Replicate’s community and models with Cloudflare’s global network will create a singular platform for building tomorrow’s next big AI applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Abridge AI scribe allegedly gives doctors an hour back daily

A new study led by Yale University found that Abridge’s ambient AI scribe significantly reduces burnout for medical professionals. Clinicians who used the documentation technology experienced a sharp decline in burnout rates over the first thirty days of use.

AI may offer a scalable solution to the administrative demands faced by practitioners nationwide. The study, published in JAMA Network Open, examined 263 practitioners across six different healthcare systems.

Burnout rates dropped from 51.9 percent to 38.8 percent after the one-month intervention programme. Secondary analysis showed the AI scribe reduced the odds of burnout by a substantial 74 percent.

The ambient AI scribe also led to substantial improvements in the clinicians’ cognitive task load. Practitioners reported they were better able to give undivided attention to patients during their clinical consultations.

High documentation demands are increasing clinician attrition, whilst physician shortages multiply across the sector. Reducing the burdensome administrative load is now critical for maintaining quality patient care and professional well-being.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK uses AI to fight drug-resistant infections

The UK is harnessing AI to combat the growing threat of drug-resistant infections, a crisis often called ‘the silent pandemic’. The Fleming Initiative and GSK will invest £45m in AI research to speed up new antibiotics and combat deadly bacteria and fungi.

The project targets Gram-negative bacteria, such as E. coli and Klebsiella, which resist treatment due to their protective outer layers. Researchers will test different molecules and use AI to identify which can penetrate and persist in these bacteria.

The goal is to shorten years of laboratory work into rapid computational predictions that guide the design of effective antibiotics.

AI will predict how resistant infections emerge and spread, helping scientists anticipate threats early. The initiative will also target deadly fungal infections, such as Aspergillus, which threaten people with weakened immune systems.

Experts hope the approach can outpace bacterial evolution and reduce the human toll from untreatable infections. Fleming Initiative director Alison Holmes emphasised the vital role of antibiotics in modern medicine and warned that overuse has squandered this critical resource.

Tony Wood, GSK’s chief scientific officer, said the project will open new avenues for discovering antibiotics while anticipating resistance, transforming the treatment and prevention of serious infections worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!