EU introduces plan to strengthen consumer protection

The European Commission has unveiled the 2030 Consumer Agenda, a strategic plan to reinforce protection, trust, and competitiveness across the EU.

With 450 million consumers contributing over half of the Union’s GDP, the agenda aims to simplify administrative processes for businesses, rather than adding new burdens, while ensuring fair treatment for shoppers.

The agenda sets four priorities to adapt to rising living costs, evolving online markets, and the surge in e-commerce. One priority, completing the Single Market, will remove cross-border barriers and enhance travel and financial services, and includes an evaluation of the Geo-Blocking Regulation's effectiveness.

A planned Digital Fairness Act will address harmful online practices, focusing on protecting children and strengthening consumer rights.

Sustainable consumption takes a central focus, with efforts to combat greenwashing, expand access to sustainable goods, and support circular initiatives such as second-hand markets and repairable products.

The Commission will also enhance enforcement to tackle unsafe or non-compliant products, particularly from third countries, ensuring that compliant businesses are shielded from unfair competition.

Implementation will be overseen through the Annual Consumer Summit and regular Ministerial Forums, which will provide political guidance and monitor progress.

The 2030 Consumer Agenda builds on prior achievements and EU consultations, aiming to modernise consumer protection instead of leaving gaps in a rapidly changing market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Young wealthy investors push advisers towards broader crypto access

A rising number of young, high-earning Americans are moving away from wealth advisers who fail to offer crypto access, signalling a sharp generational divide in portfolio expectations.

New survey results from Zerohash show that 35 percent of affluent investors aged 18 to 40 have already redirected funds to advisers who support digital-asset allocations, often shifting between $250,000 and $1 million.

Confidence in crypto has strengthened as major financial institutions accelerate adoption. Zerohash reported that more than four-fifths of surveyed investors feel more assured in the asset class thanks to involvement from BlackRock, Fidelity and Morgan Stanley.

Wealthier respondents proved the least patient. Half of those earning above $500,000 said they had already replaced advisers who lack crypto exposure, and 84 percent plan to expand their holdings over the coming year.

Demand now extends well beyond Bitcoin and Ethereum. Ninety-two percent want access to a wider range of digital assets, mirroring expanding interest in altcoin-based ETFs and staking products.

Asset managers are responding quickly, with 21Shares launching its Solana ETF in the US and BlackRock preparing a staked Ether product. The Solana category alone has attracted more than $420 million in inflows, underscoring the rising appetite for institutional-grade exposure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU proposal sparks alarm over weakened privacy rules

The European Commission has released the Digital Omnibus, prompting strong criticism from privacy advocates. Campaigners argue the reforms would weaken long-standing data protection standards and introduce sweeping changes without proper consultation.

Noyb founder Max Schrems claims the plan favours large technology firms by creating loopholes around personal data and lowering user safeguards. Critics say the proposals emerge despite limited political support from EU governments, civil society groups and several parliamentary factions.

Industry, by contrast, welcomes the Omnibus, having called for simplification and changes for a number of years. The changes should make business operations simpler for entities that process vast amounts of data.

The Commission is also accused of rushing the process under political pressure (errors can be found in the draft, including in its references to the GDPR), abandoning impact assessments, and shifting priorities away from widely supported protections. View our analysis for a deeper dive into the matter.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

New AI tools aim to speed discovery of effective HIV vaccines

Over 40 million people worldwide are living with HIV, a chronic infection that remains a leading cause of death. Developing an effective vaccine has proven difficult due to the virus’s rapid mutations and the vast volume of clinical data produced during trials.

Scripps Research has received $1.1 million from CHAVD to purchase high-performance computing and AI technology. The investment lets researchers analyse millions of vaccine candidates faster, speeding antibody identification and refining experimental vaccines.

The StepwiseDesign approach enables the AI system to evaluate vaccine-induced antibodies and identify the most promising candidates for development. It has already identified rare HIV-neutralising antibodies in uninfected individuals, demonstrating its ability to detect extremely rare precursors.

Researchers hope the computational framework will not only fast-track HIV vaccine development but also be applied to other complex pathogens, including influenza and malaria. The project highlights collaboration and innovation, with potential to improve global health outcomes for millions at risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Sundar Pichai warns users not to trust AI tools easily

Google CEO Sundar Pichai advises people not to unquestioningly trust AI tools, warning that current models remain prone to errors. He told the BBC that users should rely on a broader information ecosystem rather than treat AI as a single source of truth.

Pichai said generative systems can produce inaccuracies and stressed that people must learn what the tools are good at. The remarks follow criticism of Google’s own AI Overviews feature, which attracted attention for erratic and misleading responses during its rollout.

Experts say the risk grows when users depend on chatbots for health, science, or news. BBC research found major AI assistants misrepresented news stories in nearly half of the tests this year, underscoring concerns about factual reliability and the limits of current models.

Google is launching Gemini 3.0, which it claims offers stronger multimodal understanding and reasoning. The company says its new AI Mode in search marks a shift in how users interact with online information, as it seeks to defend market share against ChatGPT and other rivals.

Pichai says Google is increasing its investment in AI security and releasing tools to detect AI-generated images. He maintains that no single company should control such powerful technology and argues that the industry remains far from a scenario in which one firm dominates AI development.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

WHO warns Europe faces widening risks as AI outpaces regulation

A new WHO Europe report warns that AI is advancing faster than health policies can keep up, risking wider inequalities without stronger safeguards. AI already helps doctors with diagnostics, reduces paperwork and improves patient communication, yet significant structural safeguards remain incomplete.

The assessment, covering 50 participating countries across the region, shows that governments acknowledge AI’s transformative potential in personalised medicine, disease surveillance and clinical efficiency. Only a small number, however, have established national strategies.

Estonia, Finland and Spain stand out for early adoption, whether through integrated digital records, AI training programmes or pilots in primary care, but most nations face mounting regulatory gaps.

Legal uncertainty remains the most common obstacle, with 86 percent of countries citing unclear rules as the primary barrier to adoption, followed by financial constraints. Fewer than 10 percent have liability standards defining responsibility when AI-driven decisions cause harm.

WHO urged governments to align AI policy with public health goals, strengthen legal and ethical frameworks, improve cross-border data governance and invest in an AI-literate workforce to ensure patients stay at the centre of the transformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Meta wins antitrust case over monopoly claims

Meta has defeated a major antitrust challenge after a US federal judge ruled it does not currently hold monopoly power in social networking. The decision spares the company from being forced to separate Instagram and WhatsApp, which regulators had argued were acquired to suppress competition.

The judge found the Federal Trade Commission failed to prove Meta maintains present-day dominance, noting that the market has been reshaped by rivals such as TikTok. Meta argued it now faces intense competition across mobile platforms as user behaviour shifts rapidly.

FTC lawyers revisited internal emails linked to Meta’s past acquisitions, but the ruling emphasised that the case required proof of ongoing violations.

Analysts say the outcome contrasts sharply with recent decisions against Google in search and advertising, signalling mixed fortunes for large tech firms.

Industry observers note that Meta still faces substantial regulatory pressure, including upcoming US trials regarding children’s mental health and questions about its heavy investment in AI.

The company welcomed the ruling and stated that it intends to continue developing products within a competitive market framework.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI and Intuit expand financial AI collaboration

Yesterday, OpenAI and Intuit announced a major strategic partnership aimed at reshaping how people manage their personal and business finances. The arrangement will allow Intuit apps to appear directly inside ChatGPT, enabling secure and personalised financial actions within a single environment.

The agreement is worth more than one hundred million dollars and reinforces Intuit’s long-term push to strengthen its AI-driven expert platform.

Intuit will broaden its use of OpenAI’s most advanced models to support financial tasks across its products. Frontier models will help power AI agents that assist with tax preparation, cash flow forecasting, payroll management and wider financial planning.

Intuit will also continue using ChatGPT Enterprise internally so employees can work with greater speed and accuracy.

The partnership is expected to help consumers make more informed financial choices instead of relying on fragmented tools. Users will be able to explore suitable credit offers, receive clearer tax answers, estimate refunds and connect with tax specialists.

Businesses will gain tailored insights based on real-time data that can improve cash flow, automate customer follow-ups and support more effective outreach through email marketing.

Leaders from both companies argue that the collaboration will give people and firms a meaningful financial advantage. They say greater personalisation, deeper data analysis and more effortless decision making will support stronger household finances and more resilient small enterprises.

The deal expands the growing community of OpenAI enterprise customers and strengthens Intuit’s position in global financial technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google enters a new frontier with Gemini 3

Google has entered a new phase of its AI strategy with the release of Gemini 3, the company’s most advanced model to date.

The new system prioritises deeper reasoning and more subtle multimodal understanding, enabling users to approach difficult ideas with greater clarity instead of relying on repetitive prompting. It marks a major step for Google’s long-term project to integrate stronger intelligence into products used by billions.

Gemini 3 Pro is already available in preview across the Gemini app, AI Mode in Search, AI Studio, Vertex AI and Google’s new development platform known as Antigravity.

The model performs at the top of major benchmarks in reasoning, mathematics, tool use and multimodal comprehension, offering substantial improvements over Gemini 2.5 Pro.

Deep Think mode extends the model’s capabilities even further, reaching new records on demanding academic and AGI-oriented tests, although Google is delaying wider release until additional safety checks conclude.

Users can rely on Gemini 3 to learn complex topics, analyse handwritten material, decode long academic texts or translate lengthy videos into interactive guides instead of navigating separate tools.

Developers benefit from richer interactive interfaces, more autonomous coding agents and the ability to plan tasks over longer horizons.

Google Antigravity enhances this shift by giving agents direct control of the development environment, allowing them to plan, write and validate code independently while remaining under human supervision.

Google emphasises that Gemini 3 is its most extensively evaluated model, supported by independent audits and strengthened protections against manipulation. The system forms the foundation for Google’s next era of agentic, personalised AI and will soon expand with additional models in the Gemini 3 series.

The company expects the new generation to reshape how people learn, build and organise daily tasks instead of depending on fragmented digital services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The future of EU data protection under the Omnibus Package

Introduction and background information

The Commission claims that the Omnibus Package aims to simplify certain European Union legislation to strengthen the Union’s long-term competitiveness. Six omnibus packages have been announced in total.

The latest (no. 4) targets small mid-caps and digitalisation, covering data legislation, cookies and tracking technologies (i.e. the General Data Protection Regulation (GDPR) and the ePrivacy Directive (ePD)), as well as cybersecurity incident reporting and adjustments to the Artificial Intelligence Act (AIA).

That ‘simplification’ is part of a broader agenda to appease business, industry and governments who argue that the EU has too much red tape. In her September 2025 speech to German economic and business associations, Ursula von der Leyen sided with industry and stated that simplification is ‘the only way to remain competitive’.

As for why these particular laws were selected, the rationale is unclear. One stated motivation for including the GDPR is its mention in Mario Draghi’s 2024 report on ‘The Future of European Competitiveness’.

Draghi, the former President of the European Central Bank, focused on innovation in advanced technologies, decarbonisation and competitiveness, as well as security. Yet, the report does not outline any concrete way in which the GDPR allegedly reduces competitiveness or requires revision.

The GDPR appears only twice in the report. First, as a brief reference to regulatory fragmentation affecting the reuse of sensitive health data across Member States (MS).

Second, in the concluding remarks, the report claims that ‘the GDPR in particular has been implemented with a large degree of fragmentation which undermines the EU’s digital goals’. There is, however, no explanation of this ‘large fragmentation’, no supporting evidence, and no dedicated section on the GDPR, whose first mention is buried in the R&I (research and innovation) context.

It is therefore unclear what legal or analytical basis the Commission relies on to justify including the GDPR in this simplification exercise.

The current debate

There are two main sides to the debate around this Omnibus: the privacy-forward side and the competitiveness/SME side. The two need not be mutually exclusive, but civil society warns that ‘simplification’ risks eroding privacy protection. Privacy advocates across civil society expressed strong concern and opposition to simplification in their responses to the European Commission’s recent call for evidence.

Industry positions vary in tone and ambition. For example, CrowdStrike calls for greater legal certainty under the Cybersecurity Act, such as making recital 55 binding rather than merely guiding and introducing a one-stop-shop mechanism for incident reporting.

Meta, by contrast, urges the Commission to go beyond ‘easing administrative burdens’, calling for a pause in AI Act enforcement and a sweeping reform of the EU data protection law. On the civil society side, Access Now argues that fundamental rights protections are at stake.

It warns that any reduction in consent prompts could allow tracking technologies to operate without users ever being given a real opportunity to refuse. A more balanced, yet cautious line can be found in the EDPB and EDPS joint opinion regarding easing records of processing activities for SMEs.

Like industry, they support reducing administrative burdens, but with the caveat that amendments should not compromise the protection of fundamental rights, echoing key concerns of civil society.

Regarding Member State support, Estonia, France, Austria and Slovenia are firmly against any reopening of the GDPR. By contrast, the Czech Republic, Finland and Poland propose targeted amendments while Germany proposes a more systematic reopening of the GDPR.

Individual Members of the European Parliament have also come out in favour of reopening, notably Aura Salla, a Finnish centre-right MEP who previously headed Meta’s Brussels lobbying office.

Given these varied opinions, it is difficult to say what the final version of the Omnibus will look like. Yet a leaked draft of the GDPR’s potential modifications offers a strong indication: on examination, it is hard to dispute that the views of less privacy-friendly entities have served as a strong guiding influence.

Leaked draft document main changes

The leaked draft introduces several core changes.

Those changes include:

  • a new definition of personal and sensitive data,
  • the use of legitimate interest (LI) for AI processing,
  • an intertwining of the ePrivacy Directive (ePD) and the GDPR,
  • data breach reforms,
  • a centralised data protection impact assessment (DPIA) whitelist/blacklist, and
  • access rights being conditional on the motive for use.

A new definition of personal data

The draft redefines personal data so that ‘information is not personal data for everyone merely because another entity can identify that natural person’. That directly contradicts established EU case law, which holds that if an entity can, with reasonable means, identify a natural person, then the information is personal data, regardless of who else can identify that person.

A new definition of sensitive data

Under current rules, inferred information can be sensitive personal data. If a political opinion is inferred from browsing history, that inference is protected.

The draft would narrow this by limiting sensitive data to information that ‘directly reveals’ special categories (political views, health, religion, sexual orientation, race/ethnicity, trade union membership). That would remove protection from data derived through profiling and inference.

Detected patterns, such as visits to a health clinic or political website, would no longer be treated as sensitive, and only explicit statements such as ‘I support the EPP’ or ‘I am Muslim’ would remain covered.

Intertwining Article 5(3) ePD and the GDPR

Article 5(3) ePD is effectively copied into the GDPR as a new Article 88a. Article 88a would allow the processing of personal data ‘on or from’ terminal equipment where necessary for transmission, service provision, creating aggregated information (e.g. statistics), or for security purposes, alongside the existing legal bases in Articles 6(1) and 9(2) of the GDPR.

That generates confusion about how these legal bases interact, especially when combined with AI processing under LI. Would this mean that processing personal data ‘on or from’ terminal equipment is permitted if it is carried out by AI?

The scope is widened. The original ePD covered ‘storing of information, or gaining access to information already stored, in the terminal equipment’. The draft instead regulates any processing of personal data ‘on or from’ terminal equipment. That significantly expands the ePD’s reach and would force controllers to reassess and potentially adapt a broad range of existing operations.

LI for AI personal data processing

A new Article 88c GDPR, ‘Processing in the context of the development and operation of AI’, would allow controllers to rely on LI to process personal data for AI processing. That move would largely sideline data subject control. Businesses could train AI systems on individuals’ images, voices or creations without obtaining consent.

A centralised data breach portal, deadline extension and change in threshold reporting

The draft introduces three main changes to data breach reporting.

  • Extending the notification deadline from 72 to 96 hours, giving privacy teams more time to investigate and report.
  • A single EU-level reporting portal, simplifying reporting for organisations active in multiple MS.
  • Raising the notification threshold from a ‘risk’ to a ‘high risk’ to the rights and freedoms of data subjects.

The first two changes are industry-friendly measures designed to streamline operations. The third is more contentious. While industry welcomes fewer reporting obligations, civil society warns that a ‘high-risk’ threshold could leave many incidents unreported. Taken together, these reforms simplify obligations, albeit at the potential cost of reducing transparency.

Centralised processing activity (PA) list requiring a DPIA

This is another welcome change as it would clarify which PAs would automatically require a DPIA and which would not. The list would be updated every 3 years.

It should be noted, however, that some controllers may not see their PA on the list and assume, or argue, that a DPIA is not required. The language should therefore make clear that this is not a closed list.

Access request denials

Currently, a data subject may request a copy of their data regardless of the motive. Under the draft, if a data subject is deemed to exploit the right of access, for example by using the obtained material against the controller, the controller may charge a fee or refuse the request.

That is problematic for the protection of rights as it impacts informational self-determination and weakens an important enforcement tool for individuals.

For more information, an in-depth analysis by noyb can be accessed here.

The Commission’s updated version

On 19 November, the European Commission is expected to present its official simplification package. This section will be updated once the final text is published.

Final remarks

Simplification in itself is a good idea, and businesses need enough freedom to operate without being suffocated by red tape. However, changing a cornerstone of data protection law to such an extent that it threatens fundamental rights protections is just cause for concern.

Alarms have already been raised after the previous Omnibus package on green due diligence obligations was scrapped. We may now be witnessing a similar rollback, this time targeting digital rights.

As a result, all eyes are on 19 November, a date that could reshape not only EU privacy standards but also global data protection norms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!