EU introduces plan to strengthen consumer protection

The European Commission has unveiled the 2030 Consumer Agenda, a strategic plan to reinforce protection, trust, and competitiveness across the EU.

With 450 million consumers contributing over half of the Union’s GDP, the agenda aims to simplify administrative processes for businesses, rather than adding new burdens, while ensuring fair treatment for shoppers.

The agenda sets four priorities to adapt to rising living costs, evolving online markets, and the surge in e-commerce. The first, completing the Single Market, will remove cross-border barriers and enhance travel and financial services, alongside an evaluation of the Geo-Blocking Regulation's effectiveness.

A planned Digital Fairness Act will address harmful online practices, focusing on protecting children and strengthening consumer rights.

Sustainable consumption takes a central focus, with efforts to combat greenwashing, expand access to sustainable goods, and support circular initiatives such as second-hand markets and repairable products.

The Commission will also enhance enforcement to tackle unsafe or non-compliant products, particularly from third countries, ensuring that compliant businesses are shielded from unfair competition.

Implementation will be overseen through the Annual Consumer Summit and regular Ministerial Forums, which will provide political guidance and monitor progress.

The 2030 Consumer Agenda builds on prior achievements and EU consultations, aiming to modernise consumer protection instead of leaving gaps in a rapidly changing market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Roblox brings in global age checks for chat

Children will no longer be able to chat with adult strangers on Roblox after new global age checks are introduced. The platform will begin mandatory facial age estimation in selected countries in December before expanding worldwide in January.

Roblox players will be placed into strict age groups and prevented from messaging older users unless they are verified as trusted contacts. Under-13s will remain barred from private messages unless parents actively approve access within account controls.

The company faces rising scrutiny following lawsuits in several US states, where officials argue Roblox failed to protect young users from harmful contact. Safety groups welcome the tighter rules but warn that monitoring must match the platform’s rapid growth.

Roblox says the technology is accurate and helps deliver safer digital spaces for younger players. Campaigners continue to call for broader protections as millions of children interact across games, chats and AI-enhanced features each day.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU proposal sparks alarm over weakened privacy rules

The European Commission has released the Digital Omnibus, prompting strong criticism from privacy advocates. Campaigners argue the reforms would weaken long-standing data protection standards and introduce sweeping changes without proper consultation.

Noyb founder Max Schrems claims the plan favours large technology firms by creating loopholes around personal data and lowering user safeguards. Critics say the proposals emerge despite limited political support from EU governments, civil society groups and several parliamentary factions.

Industry, which has called for simplification for years, has welcomed the Omnibus. The changes should make business activities simpler for entities that process vast amounts of data.

The Commission is also accused of rushing the process under political pressure (the draft contains errors, including incorrect references to the GDPR), abandoning impact assessments and shifting priorities away from widely supported protections. View our analysis for a deep dive on the matter.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Health sector AI growth in Europe raises safety concerns

Concerns are growing as European countries expand the use of AI in healthcare without establishing sufficient protections for patients or healthcare workers.

A new World Health Organisation report found significant disparities in how nations develop, regulate and fund AI tools.

Some countries are rapidly deploying chatbots, imaging systems and data-analysis tools, while others have barely started integrating AI into their health services. Only four nations across Europe and Central Asia currently have a national strategy dedicated to AI in healthcare.

WHO officials warn that weak safeguards could lead to biased algorithms, medical errors and increased inequality in access to care.

The report urges governments to strengthen legal frameworks, train health workers in AI literacy and ensure these technologies are rigorously tested before reaching patients.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Sundar Pichai warns users not to trust AI tools easily

Google CEO Sundar Pichai advises people not to trust AI tools unquestioningly, warning that current models remain prone to errors. He told the BBC that users should rely on a broader information ecosystem rather than treat AI as a single source of truth.

Pichai said generative systems can produce inaccuracies and stressed that people must learn what the tools are good at. The remarks follow criticism of Google’s own AI Overviews feature, which attracted attention for erratic and misleading responses during its rollout.

Experts say the risk grows when users depend on chatbots for health, science, or news. BBC research found major AI assistants misrepresented news stories in nearly half of the tests this year, underscoring concerns about factual reliability and the limits of current models.

Google is launching Gemini 3, which it claims offers stronger multimodal understanding and reasoning. The company says its new AI Mode in search marks a shift in how users interact with online information, as it seeks to defend market share against ChatGPT and other rivals.

Pichai says Google is increasing its investment in AI security and releasing tools to detect AI-generated images. He maintains that no single company should control such powerful technology and argues that the industry remains far from a scenario in which one firm dominates AI development.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA pushes forward with AI-ready data

Enterprises are facing growing pressure to prepare unstructured data for use in modern AI systems as organisations struggle to turn prototypes into production tools.

Only around forty percent of AI projects advance beyond the pilot phase, with limits in data quality and availability among the main reasons the rest stall. Most organisational information now comes in unstructured form, ranging from emails to video files, which offers little coherence and places a heavy load on governance systems.

AI agents need secure, recent and reliable data instead of fragmented information scattered across multiple storage silos. Preparing such data demands extensive curation, metadata work, semantic chunking and the creation of vector embeddings.
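To make the preparation step concrete, here is a minimal Python sketch of chunking a document and creating vector embeddings, assuming the open-source sentence-transformers library and an arbitrary small model. It illustrates the general technique only, not NVIDIA's GPU-accelerated pipeline; the document, model name and chunk sizes are our own illustrative choices.

```python
# Minimal sketch of the preparation steps described above: chunk an
# unstructured document, attach provenance metadata, and create vector
# embeddings. Model name and chunk sizes are illustrative choices,
# not part of NVIDIA's platform.
from sentence_transformers import SentenceTransformer

def chunk_text(text: str, size: int = 400, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows so statements are
    not cut off at hard boundaries."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

# Illustrative document with minimal provenance metadata.
document = {"id": "email-0001", "source": "mail-archive",
            "text": "Quarterly revenue grew across all regions. " * 30}

model = SentenceTransformer("all-MiniLM-L6-v2")
chunks = chunk_text(document["text"])
vectors = model.encode(chunks)  # one embedding per chunk

# Pair each chunk with its vector and provenance, ready for a vector index.
records = [{"doc_id": document["id"], "chunk": c, "vector": v.tolist()}
           for c, v in zip(chunks, vectors)]
print(f"{len(records)} chunks embedded, dim={len(records[0]['vector'])}")
```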

Enterprises also struggle with the rising speed of data creation and the spread of duplicate copies, which increases both operational cost and security concerns.

An emerging approach by NVIDIA, known as the AI data platform, aims to address these challenges by embedding GPU acceleration directly into the data path. The platform prepares and indexes information in place, allowing enterprises to reduce data drift, strengthen governance and avoid unnecessary replication.

Any change to a source document is immediately reflected in the associated AI representations, improving accuracy and consistency for business applications.
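The synchronisation behaviour described above can be sketched in a few lines: re-embed a document only when its content hash changes, so stored vectors never drift from their sources. The helper names and the stand-in embedding function below are ours, not NVIDIA's API, which performs this work in the data path rather than in application code.

```python
# Sketch of keeping embeddings in sync with source documents:
# re-embed only when a document's content hash changes.
import hashlib

index: dict[str, dict] = {}  # doc_id -> {"hash": ..., "vector": ...}

def fake_embed(text: str) -> list[float]:
    # Stand-in for a real embedding model call.
    return [len(text) / 1000.0]

def upsert(doc_id: str, text: str) -> bool:
    """Re-embed a document only if its content actually changed.
    Returns True when the stored representation was refreshed."""
    digest = hashlib.sha256(text.encode()).hexdigest()
    entry = index.get(doc_id)
    if entry and entry["hash"] == digest:
        return False                       # unchanged: no drift, no work
    index[doc_id] = {"hash": digest, "vector": fake_embed(text)}
    return True

assert upsert("contract-7", "v1 text") is True   # first insert
assert upsert("contract-7", "v1 text") is False  # unchanged, skipped
assert upsert("contract-7", "v2 text") is True   # change propagated
```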

NVIDIA is positioning its own AI Data Platform reference design as a next step for enterprise storage. The design combines RTX PRO 6000 Blackwell Server Edition GPUs, BlueField-3 DPUs and integrated AI processing pipelines.

Leading technology providers, including Cisco, Dell Technologies, IBM, HPE, NetApp and Pure Storage, have adopted the model as they prepare storage systems for broader use of generative AI in the enterprise sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI and Intuit expand financial AI collaboration

Yesterday, OpenAI and Intuit announced a major strategic partnership aimed at reshaping how people manage their personal and business finances. The arrangement will allow Intuit apps to appear directly inside ChatGPT, enabling secure and personalised financial actions within a single environment.

The agreement, worth more than one hundred million dollars, reinforces Intuit’s long-term push to strengthen its AI-driven expert platform.

Intuit will broaden its use of OpenAI’s most advanced models to support financial tasks across its products. Frontier models will help power AI agents that assist with tax preparation, cash flow forecasting, payroll management and wider financial planning.

Intuit will also continue using ChatGPT Enterprise internally so employees can work with greater speed and accuracy.

The partnership is expected to help consumers make more informed financial choices instead of relying on fragmented tools. Users will be able to explore suitable credit offers, receive clearer tax answers, estimate refunds and connect with tax specialists.

Businesses will gain tailored insights based on real-time data that can improve cash flow, automate customer follow-ups and support more effective outreach through email marketing.

Leaders from both companies argue that the collaboration will give people and firms a meaningful financial advantage. They say greater personalisation, deeper data analysis and more effortless decision making will support stronger household finances and more resilient small enterprises.

The deal expands the growing community of OpenAI enterprise customers and strengthens Intuit’s position in global financial technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google enters a new frontier with Gemini 3

Google has entered a new phase of its AI strategy with the release of Gemini 3, the company’s most advanced model to date.

The new system prioritises deeper reasoning and more subtle multimodal understanding, enabling users to approach difficult ideas with greater clarity instead of relying on repetitive prompting. It marks a major step for Google’s long-term project to integrate stronger intelligence into products used by billions.

Gemini 3 Pro is already available in preview across the Gemini app, AI Mode in Search, AI Studio, Vertex AI and Google’s new development platform known as Antigravity.

The model performs at the top of major benchmarks in reasoning, mathematics, tool use and multimodal comprehension, offering substantial improvements over Gemini 2.5 Pro.

Deep Think mode extends the model’s capabilities even further, reaching new records on demanding academic and AGI-oriented tests, although Google is delaying wider release until additional safety checks conclude.

Users can rely on Gemini 3 to learn complex topics, analyse handwritten material, decode long academic texts or translate lengthy videos into interactive guides instead of navigating separate tools.

Developers benefit from richer interactive interfaces, more autonomous coding agents and the ability to plan tasks over longer horizons.

Google Antigravity enhances this shift by giving agents direct control of the development environment, allowing them to plan, write and validate code independently while remaining under human supervision.

Google emphasises that Gemini 3 is its most extensively evaluated model, supported by independent audits and strengthened protections against manipulation. The system forms the foundation for Google’s next era of agentic, personalised AI and will soon expand with additional models in the Gemini 3 series.

The company expects the new generation to reshape how people learn, build and organise daily tasks instead of depending on fragmented digital services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok launches new tools to manage AI-generated content

TikTok has announced new tools to help users shape and understand AI-generated content (AIGC) in their feeds. A new ‘Manage Topics’ control will let users adjust how much AI content appears in their For You feeds alongside keyword filters and the ‘not interested’ option.

The aim is to personalise content rather than remove it entirely.

To strengthen transparency, TikTok is testing ‘invisible watermarking’ for AI-generated content created with TikTok tools or uploaded using C2PA Content Credentials. Combined with creator labels and AI detection, these watermarks help track and identify content even if edited or re-uploaded.
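As a rough illustration of how an invisible watermark can ride along inside pixel data, the sketch below uses the textbook least-significant-bit technique in Python. TikTok has not disclosed its actual scheme, and production watermarks are engineered to survive editing and re-encoding, which plain LSB does not; everything here is illustrative only.

```python
# Minimal illustration of invisible watermarking via least-significant-bit
# (LSB) embedding. This is the textbook idea only; it is NOT TikTok's
# scheme, and real AIGC watermarks resist cropping and re-encoding,
# which plain LSB does not.
import numpy as np

def embed_bits(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write one watermark bit into the lowest bit of each of the
    first len(bits) pixel values."""
    flat = image.flatten().copy()
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_bits(image: np.ndarray, n: int) -> np.ndarray:
    """Read the watermark back from the lowest bit of the first n values."""
    return image.flatten()[:n] & 1

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # fake image
payload = rng.integers(0, 2, size=128, dtype=np.uint8)          # 128-bit ID

marked = embed_bits(frame, payload)
assert np.array_equal(extract_bits(marked, 128), payload)
# The change is visually imperceptible: at most 1 intensity level per value.
print("max pixel delta:", np.abs(marked.astype(int) - frame.astype(int)).max())
```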

The platform has launched a $2 million AI literacy fund to support global experts in creating educational content on responsible AI. TikTok collaborates with industry partners and non-profits like Partnership on AI to promote transparency, research, and best practices.

Investments in AI extend beyond moderation and labelling. TikTok is developing innovative features such as Smart Split and AI Outline to enhance creativity and discovery, while using AI to protect user safety and improve the well-being of its trust and safety teams.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Poll manipulation by AI threatens democratic accuracy, according to a new study

Public opinion surveys face a growing threat as AI becomes capable of producing highly convincing fake responses. New research from Dartmouth shows that AI-generated answers can pass every quality check, imitate real human behaviour and alter poll predictions without leaving evidence.

In several major polls conducted before the 2024 US election, inserting only a few dozen synthetic responses would have reversed expected outcomes.
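A quick hypothetical (our numbers, not the study’s) shows why so few responses suffice: in a 1,000-person poll where the trailing candidate sits at 49.5 percent, adding just 25 synthetic responses reverses the lead.

```python
# Hypothetical illustration (our numbers, not the Dartmouth study's):
# in a close poll, a few dozen synthetic responses flip the leader.
poll_a, poll_b = 495, 505   # 1,000 genuine respondents; B leads 50.5% to 49.5%
fake_for_a = 25             # a few dozen synthetic responses for A

total = poll_a + poll_b + fake_for_a
share_a = (poll_a + fake_for_a) / total
share_b = poll_b / total
print(f"A: {share_a:.1%}  B: {share_b:.1%}")  # A: 50.7%  B: 49.3% -> lead reversed
```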

The study reveals how easily malicious actors could influence democratic processes. AI models can operate in multiple languages yet deliver flawless English answers, allowing foreign groups to bypass detection.

An autonomous synthetic respondent created for the study passed nearly all attention checks, avoided errors in logic puzzles and adjusted its tone to match assigned demographic profiles instead of exposing its artificial nature.

The potential consequences extend far beyond electoral polling. Many scientific disciplines rely heavily on survey data to track public health risks, measure consumer behaviour or study mental wellbeing.

If AI-generated answers infiltrate such datasets, the reliability of thousands of studies could be compromised, weakening evidence used to shape policy and guide academic research.

Financial incentives further raise the risk. Human participants earn modest fees, while AI can produce survey responses at almost no cost. Existing detection methods failed to identify the synthetic respondent at any stage.

The researcher urges survey companies to adopt new verification systems that confirm the human identity of participants, arguing that stronger safeguards are essential to protect democratic accountability and the wider research ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!