The EU’s attempt to revise core privacy rules has faced resistance from France, which argues that the Commission’s proposals would weaken rather than strengthen long-standing protections.
Paris objects strongly to proposed changes to the definition of personal data within the General Data Protection Regulation, which remains the foundation of European privacy law. Officials have also raised concerns about several smaller adjustments included in the broader effort to modernise digital legislation.
These proposals form part of the Digital Omnibus package, a set of updates intended to streamline EU data rules. France argues that altering the GDPR’s definitions could change the balance between data controllers, regulators and citizens, creating uncertainty for national enforcement bodies.
The national government maintains that the existing framework already includes the flexibility needed to interpret sensitive information.
The disagreement highlights renewed tension inside the Union as institutions examine the future direction of privacy governance.
Several member states want greater clarity in an era shaped by AI and cross-border data flows. In contrast, others fear that opening the GDPR could lead to inconsistent application across Europe.
Talks are expected to continue in the coming months as EU negotiators weigh the political risks of narrowing or widening the scope of personal data.
France’s firm stance suggests that consensus may prove difficult, particularly as governments seek to balance economic goals with unwavering commitments to user protection.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Institutions in the EU have begun designing a new framework to help European armies share defence information securely, rather than relying on US technology.
The plan centres on creating a military-grade data platform, the European Defence Artificial Intelligence Data Space, intended to support sensitive exchanges among defence authorities.
Ultimately, the approach aims to replace the current patchwork of foreign infrastructure that many member states rely on to store and transfer national security data.
The European Defence Agency is leading the effort and expects the platform to be fully operational by 2030. The concept includes two complementary elements: a sovereign military cloud for data storage and a federated system that allows countries to exchange information on a trusted basis.
Officials argue that this will improve interoperability, speed up joint decision-making, and enhance operational readiness across the bloc.
The project aligns with broader concerns about strategic autonomy, as EU leaders increasingly question long-standing dependencies on American providers.
Several European companies have been contracted to develop the early technical foundations. The next step is persuading governments to coordinate future purchases so their systems remain compatible with the emerging framework.
Planning documents suggest that by 2029, member states should begin integrating the data space into routine military operations, including training missions and coordinated exercises. EU authorities maintain that stronger control of defence data will be essential as military AI expands across European forces.
Hamad Bin Khalifa University has unveiled the UNESCO Chair on Digital Technologies and Human Behaviour to strengthen global understanding of how emerging tools shape society.
The initiative, based in the College of Science and Engineering in Qatar, will examine the relationship between digital adoption and human behaviour, focusing on digital well-being, ethical design and healthier online environments.
The Chair is set to address issues such as internet addiction, cyberbullying and misinformation through research and policy-oriented work.
By promoting dialogue among international organisations, governments and academic institutions, the programme aims to support the more responsible development of digital technologies rather than approaches that overlook societal impact.
HBKU’s long-standing emphasis on ethical innovation formed the foundation for the new initiative. The launch event brought together experts from several disciplines to discuss behavioural change driven by AI, mobile computing and social media.
An expert panel considered how GenAI can improve daily life while also increasing dependency, and encouraged users to shift towards a more intentional and balanced relationship with AI systems.
UNESCO underlined the importance of linking scientific research with practical policymaking to guide institutions and communities.
The Chair is expected to strengthen cooperation across sectors and support progress on global development goals by ensuring digital transformation remains aligned with human dignity, social cohesion and inclusive growth.
A proposal filed with the US Federal Communications Commission seeks approval for a constellation of up to one million solar-powered satellites designed to function as orbiting data centres for artificial intelligence computing, according to documents submitted by SpaceX.
The company described the network as an efficient response to growing global demand for AI processing power, positioning space-based infrastructure as a new frontier for large-scale computation.
In its filing, SpaceX framed the project in broader civilisational terms, suggesting the constellation could support humanity’s transition towards harnessing the Sun’s full energy output and enable long-term multi-planetary development.
Regulators are unlikely to approve the full scale immediately, with analysts viewing the figure as a negotiating position. The FCC recently authorised thousands of additional Starlink satellites while delaying approval for a larger proposed expansion.
Concerns continue to grow over orbital congestion, space debris, and environmental impacts, as satellite numbers rise sharply and rival companies seek similar regulatory extensions.
The UK and Bulgaria are expanding cooperation on semiconductor technology to strengthen supply chains and support Europe’s growing need for advanced materials.
The partnership links British expertise with Bulgaria’s ambitions under the 2023 EU Chips Act, creating opportunities for investment, innovation and skills development.
The Science and Technology Network has acted as a bridge between both countries by bringing together government, industry and academia. A high-level roundtable in Sofia, a study visit to Scotland and a trade mission to Bulgaria encouraged firms and institutions to explore new partnerships.
These exchanges helped shape joint projects and paved the way for shared training programmes.
Several concrete outcomes have followed. A €350 million Green Silicon Carbide wafer factory is moving ahead, supported by significant UK export wins.
Universities in Glasgow and Sofia have signed a research memorandum, while TechWorks UK and Bulgaria’s BASEL have agreed on an industry partnership. The next phase is expected to focus on launching the new factory, deepening research cooperation and expanding skills initiatives.
Bulgaria’s fast-growing electronics and automotive sectors have strengthened its position as a key European manufacturing hub. The country produces most sensors used in European cars and hosts modern research centres and smart factories.
The combined effect of EU funding, national investment and international collaboration is helping Bulgaria secure a prominent role in Europe’s semiconductor supply chain.
OpenAI has confirmed that several legacy AI models will be removed from ChatGPT, with GPT-4o scheduled for retirement on 13 February. The decision follows months of debate after the company reinstated the model amid strong user backlash.
Alongside GPT-4o, the models being withdrawn include GPT-5 Instant, GPT-5 Thinking, GPT-4.1, GPT-4.1 mini, and o4-mini. The changes apply only to ChatGPT, while developers will continue to access the models through OpenAI’s API.
GPT-4o had built a loyal following for its natural writing style and emotional awareness, with many users arguing newer models felt less expressive. When OpenAI first attempted to phase it out in 2025, widespread criticism prompted a temporary reversal.
Company data now suggests active use of GPT-4o has dropped to around 0.1% of daily users. OpenAI says features associated with the model have since been integrated into GPT-5.2, including personality tuning and creative response controls.
Despite this, criticism has resurfaced across social platforms, with users questioning usage metrics and highlighting that GPT-4o was no longer prominently accessible. Comments from OpenAI leadership acknowledging recent declines in writing quality have further fuelled concerns about the model’s removal.
Poland has disclosed a coordinated cyber sabotage campaign targeting more than 30 renewable energy sites in late December 2025. The incidents occurred during severe winter weather and were intended to cause operational disruption, according to CERT Polska.
Electricity generation and heat supply in Poland continued, but attackers disabled communications and remote control systems across multiple facilities. Both IT networks and industrial operational technology were targeted, marking a rare shift toward destructive cyber activity against energy infrastructure.
Investigators found that attackers accessed renewable substations through exposed FortiGate devices, often unprotected by multi-factor authentication. After breaching networks, they mapped systems, damaged firmware, wiped controllers, and disabled protection relays.
Two previously unknown wiper tools, DynoWiper and LazyWiper, were used to corrupt and delete data without ransom demands. The malware spread through compromised Active Directory systems using malicious Group Policy tasks to trigger simultaneous destruction.
CERT Polska linked the infrastructure to the Russia-connected threat cluster Static Tundra, though some firms suggest Sandworm involvement. The campaign marks the first publicly confirmed destructive operation attributed to this actor, highlighting rising cyber-sabotage risks to critical energy systems.
AI is increasingly being used to answer questions about faith, morality, and suffering, not just everyday tasks. As AI systems become more persuasive, religious leaders are raising concerns about the authority people may assign to machine-generated guidance.
Within this context, Catholic outlet EWTN Vatican examined Magisterium AI, a platform designed to reference official Church teaching rather than produce independent moral interpretations. Its creators say responses are grounded directly in doctrinal sources.
Founder Matthew Sanders argues mainstream AI models are not built for theological accuracy. He warns that while machines sound convincing, they should never be treated as moral authorities without grounding in Church teaching.
Church leaders have also highlighted broader ethical risks associated with AI, particularly regarding human dignity and emotional dependency. Recent Vatican discussions stressed the need for education and safeguards.
Supporters say faith-based AI tools can help navigate complex religious texts responsibly. Critics remain cautious, arguing spiritual formation should remain rooted in human guidance.
A leading British think tank has urged the government to introduce ‘nutrition labels’ for AI-generated news, arguing that clearer rules are needed as AI becomes a dominant source of information.
The Institute for Public Policy Research said AI firms are increasingly acting as new gatekeepers of the internet and must pay publishers for the journalism that shapes their output.
The group recommended standardised labels showing which sources underpin AI-generated answers, instead of leaving users unsure about the origin or reliability of the material they read.
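The report does not specify a format, but a short sketch helps illustrate what a machine-readable ‘nutrition label’ for an AI-generated answer might contain. All field names, outlet names and values below are illustrative assumptions, not part of the IPPR proposal:

```python
import json

# Hypothetical structure for an AI news "nutrition label": which sources
# underpin an answer and roughly how much each contributed. Every field
# here is an illustrative assumption, not drawn from the IPPR report.
label = {
    "answer_id": "example-001",
    "generated_by": "example-model",  # assumed model identifier
    "sources": [
        {"outlet": "Example Daily", "licensed": True, "share": 0.6},
        {"outlet": "Example Wire", "licensed": False, "share": 0.4},
    ],
    "retrieved_at": "2025-01-01T00:00:00Z",
}

# Source shares should account for the whole answer.
assert abs(sum(s["share"] for s in label["sources"]) - 1.0) < 1e-9

print(json.dumps(label, indent=2))
```

A label like this would let readers, and regulators, see at a glance whether an answer rests on licensed journalism or on unlicensed scraping.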
It also called for a formal licensing system in the UK that would allow publishers to negotiate directly with technology companies over the use of their content. The move comes as a growing share of the public turns to AI for news, while Google’s AI summaries reach billions each month.
IPPR’s study found that some AI platforms rely heavily on content from outlets with licensing agreements, such as the Guardian and the Financial Times, while others, like the BBC, appear far less often due to restrictions on scraping.
The think tank warned that such patterns could weaken media plurality by sidelining local and smaller publishers instead of supporting a balanced ecosystem. It added that Google’s search summaries have already reduced traffic to news websites by providing answers before users click through.
The report said public funding should help sustain investigative and local journalism as AI tools expand. OpenAI responded that its products highlight sources and provide links to publishers, arguing that careful design can strengthen trust in the information people see online.
Massachusetts Institute of Technology researchers have developed a compact ultrasound system designed to make breast cancer screening more accessible and frequent, particularly for people at higher risk.
The portable device could be used in doctors’ offices or at home, helping detect tumours earlier than current screening schedules allow.
The system pairs a small ultrasound probe with a lightweight processing unit to deliver real-time 3D images via a laptop. Researchers say its portability and low power use could improve access in rural areas where traditional ultrasound machines are impractical.
Frequent monitoring is critical, as aggressive interval cancers can develop between routine mammograms and account for up to 30% of breast cancer cases.
By enabling regular ultrasound scans without specialised technicians or bulky equipment, the technology could increase early detection, when survival outcomes are significantly higher.
Initial testing successfully produced clear, gap-free 3D images of breast tissue, and larger clinical trials are now underway at partner hospitals. The team is developing a smaller version that could connect to a smartphone and be integrated into a wearable device for home use.