DIGITALEUROPE urges changes to EU AI Act rules for industry

European industry representatives are urging policymakers to reconsider parts of the EU AI Act, arguing that the current framework could impose significant compliance costs on companies developing AI tools for industrial and medical technologies.

According to Cecilia Bonefeld-Dahl, director-general of DIGITALEUROPE, manufacturers of high-tech machines, medical devices, and radio equipment are already subject to strict product safety regulations. Adding AI-specific requirements could create unnecessary administrative burdens for firms that are already heavily regulated. She argues that policymakers should aim for balanced AI regulation that encourages innovation while maintaining safety standards.

Industry groups warn that classifying certain AI systems as high-risk under Annex I of the AI Act could be particularly costly for smaller firms. DIGITALEUROPE estimates that a company with around 50 employees developing an AI-based product could incur initial compliance costs of €320,000 to €600,000, followed by annual expenses of up to €150,000. According to the organisation, such costs could reduce profits significantly and discourage smaller companies from pursuing AI innovation.
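As a rough illustration of how these figures accumulate, the sketch below combines DIGITALEUROPE's estimated one-off and recurring costs over a multi-year horizon. The €320,000–600,000 initial range and €150,000 annual figure come from the organisation's estimate; the five-year horizon and the absence of discounting are illustrative assumptions.

```python
# Sketch of DIGITALEUROPE's estimated compliance burden for a
# ~50-employee firm: a one-off initial cost plus recurring annual costs.
def total_compliance_cost(initial_eur: float, annual_eur: float, years: int) -> float:
    """Cumulative compliance spend over a number of years (no discounting)."""
    return initial_eur + annual_eur * years

# Lower and upper bounds of the cited estimate over a five-year horizon.
low = total_compliance_cost(320_000, 150_000, 5)   # 1,070,000
high = total_compliance_cost(600_000, 150_000, 5)  # 1,350,000
print(f"5-year compliance cost: EUR {low:,.0f} to EUR {high:,.0f}")
```

Even at the lower bound, a five-year bill above €1 million illustrates why smaller firms say the burden could be decisive.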

Manufacturing and medical technology sectors across Europe employ millions of workers and increasingly rely on AI to improve product performance and safety. Industry representatives argue that many applications, such as AI systems used to enhance industrial equipment safety or improve medical devices, already operate under established regulatory frameworks. They contend that these existing frameworks could be adapted rather than adding further layers of regulation.

The broader regulatory landscape is also contributing to concerns among technology companies. Over the past six years, the EU has introduced nearly 40 new technology-related regulations, some of which overlap or impose similar compliance requirements. DIGITALEUROPE estimates that compliance with the AI Act could cost companies approximately €3.3 billion annually, while cybersecurity and data-sharing regulations add further financial obligations.

Industry leaders warn that rising compliance costs could affect investment in AI development across Europe. Current estimates suggest that the EU accounts for about 7.5% of global AI investment, significantly behind the United States and China.

DIGITALEUROPE has called on the EU institutions to consider postponing parts of the AI Act’s implementation timeline to allow further discussion on how high-risk AI systems should be defined. Supporters of this approach argue that additional consultation could help ensure the regulatory framework protects consumers while also enabling European companies to compete globally in the rapidly evolving AI sector.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New venture aims to build AI that understands the real world

AI pioneer Yann LeCun has secured more than $1 billion in funding for a new startup that aims to rethink how AI systems learn about the world.

The venture, called Advanced Machine Intelligence (AMI), will focus on developing AI that learns from real-world signals, such as camera and sensor data, rather than relying primarily on text. According to the French company, such systems could make better decisions by understanding how events unfold in the physical world.

AMI plans to build what researchers call ‘world models’, AI systems designed to predict the consequences of actions before they happen. Developers believe that grounding AI in real-world data could make the technology more reliable and easier to control, especially in critical safety applications.
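In outline, a world model is a learned transition function that predicts the next state given the current state and a candidate action, so the system can evaluate consequences in simulation before acting. The toy linear dynamics and candidate actions below are purely illustrative and are not AMI's actual approach.

```python
import numpy as np

# Toy "world model": a transition function predicting the next state
# from the current state and an action. Real world models are learned
# from camera and sensor data; this linear stand-in is illustrative only.
def predict_next_state(state: np.ndarray, action: np.ndarray) -> np.ndarray:
    return state + 0.1 * action  # assumed simple dynamics

def plan(state: np.ndarray, goal: np.ndarray, candidate_actions) -> np.ndarray:
    """Pick the action whose *predicted* outcome lands closest to the goal,
    evaluating consequences in the model before acting in the world."""
    return min(candidate_actions,
               key=lambda a: np.linalg.norm(predict_next_state(state, a) - goal))

state, goal = np.zeros(2), np.array([1.0, 0.0])
actions = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, 0.0])]
best = plan(state, goal, actions)
print(best)
```

The point of the pattern is the planning loop: actions are scored against predicted futures, which is what developers mean by making the technology easier to control.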

Operations will span several global research hubs, including Paris, New York City, Montreal and Singapore. The company has already begun assembling its leadership team, appointing entrepreneur Alex LeBrun as chief executive and AI researcher Saining Xie as chief science officer.

Support for the project quickly appeared online. French President Emmanuel Macron welcomed the launch, saying it represented a new chapter in AI and highlighting the role of researchers and innovators in shaping the technology’s future.

LeCun is widely regarded as one of the key figures behind modern AI. In 2018, he shared the prestigious Turing Award with fellow researchers Geoffrey Hinton and Yoshua Bengio for their contributions to deep learning.

Research at AMI will focus on building AI systems that can reason, plan actions and maintain long-term memory. Possible applications range from robotics and industrial automation to healthcare and wearable technologies, areas where dependable AI could have a major impact.

LeCun and his team argue that genuine intelligence cannot emerge from language alone. Understanding the world, they say, requires machines that learn directly from it.

EU charts roadmap for tokenised financial markets

The European Central Bank (ECB) has unveiled Appia, a strategic roadmap for developing Europe’s tokenised financial ecosystem anchored in central bank money. The initiative aims to guide the shift from traditional finance to tokenised markets while ensuring stability and interoperability.

A key component of Appia is Pontes, the Eurosystem’s distributed ledger technology (DLT) settlement solution. Pontes, set for Q3 2026 pilots, will enable central bank money transactions and connect DLT infrastructures with the Eurosystem’s TARGET2, T2S, and TIPS services.

The ECB has opened a public consultation inviting feedback and proposals from both public and private sector stakeholders. Respondents’ input will help refine the roadmap and shape the long-term blueprint for Europe’s tokenised financial system.

Appia also complements ongoing efforts on the digital euro, with payment service provider selection planned for 2026 and a 12-month pilot trial in the second half of 2027.

The initiative highlights the ECB’s commitment to integrating emerging technologies while preserving financial stability.

UK watchdog demands stronger child safety on social platforms

The British communications regulator Ofcom has called on major technology companies to enforce stricter age controls and improve safety protections for children using online platforms.

The warning targets services widely used by young audiences, including Facebook, Instagram, Roblox, Snapchat, TikTok and YouTube.

Regulators said that despite existing minimum age policies, large numbers of children under the age of 13 continue to access platforms intended for older users.

According to Ofcom research, more than 70 percent of children aged 8 to 12 regularly use such services.

Authorities have asked companies to demonstrate how they will strengthen protections and ensure compliance with minimum age requirements.

Platforms must present their plans by 30 April, after which Ofcom will publish an assessment of their responses and determine whether further regulatory action is necessary.

The regulator also outlined several key areas requiring improvement.

Companies in the UK are expected to implement more effective age-verification systems, strengthen protections against online grooming and ensure that recommendation algorithms do not expose children to harmful content.

Another concern involves product development practices.

Ofcom warned that new digital features, including AI tools, should not be tested on children without adequate safety assessments. Platforms are required to evaluate potential risks before launching significant updates.

The measures are part of the UK’s broader regulatory framework introduced under the Online Safety Act, which aims to reduce exposure to harmful online material.

The law requires platforms to prevent children from accessing content linked to pornography, suicide, self-harm and eating disorders, while limiting the promotion of violent or abusive material in recommendation feeds.

Ofcom indicated that enforcement action may follow if companies fail to demonstrate meaningful improvements. Regulators argue that stronger safeguards are necessary to restore public trust and ensure that digital platforms prioritise child safety in their design and operation.

AI-powered Copilot Health platform introduced by Microsoft

Microsoft has introduced Copilot Health, a new feature that uses AI to help users interpret personal health data and prepare for medical consultations.

The tool will operate as a separate and secure environment within Microsoft’s Copilot ecosystem, allowing users to combine health records, wearable data, and medical history into a single profile. The system then uses AI to analyse patterns and generate personalised insights intended to support conversations with healthcare professionals.

Microsoft said the feature aims to help people better understand existing medical information rather than replace clinical care. Users can review trends such as sleep patterns, activity levels, and vital signs gathered from wearable devices, alongside test results and visit summaries from healthcare providers.

Copilot Health can integrate data from more than 50 wearable devices, including systems connected through platforms such as Apple Health, Fitbit, and Oura. The platform can also access health records from over 50,000 US hospitals and provider organisations through HealthEx, as well as laboratory test results from Function.

According to Microsoft, the system builds on ongoing research into medical AI systems, including work on the Microsoft AI Diagnostic Orchestrator (MAI-DxO). The company said future publications will explore how such systems could assist in analysing complex medical cases.

Privacy and security are central elements of the design. Microsoft stated that Copilot Health data and conversations are stored separately from standard Copilot interactions and protected through encryption and access controls. The company also noted that health information used in the service will not be used to train AI models.

Development of the system involves Microsoft’s internal clinical team and an external advisory group of more than 230 physicians from 24 countries. The company said Copilot Health has also achieved ISO/IEC 42001 certification, a standard focused on the governance of AI management systems.

The feature is being introduced through a phased rollout, beginning with a waitlist for early users who will help shape the service as it develops.

EU privacy watchdogs warn over US plans to expand traveller data collection

European privacy authorities have raised concerns about proposed changes to the Electronic System for Travel Authorisation (ESTA) that could require travellers to the US to disclose extensive personal information, including social media activity.

The European Data Protection Board, which coordinates national data protection authorities across the EU, sent a letter to the European Commission asking whether the institution plans to intervene or respond to the updated requirements.

The proposal would apply to visitors entering the US through the visa-waiver programme for short stays of up to 90 days.

Under the proposed changes, travellers may be required to provide details about their social media accounts covering the previous five years.

Authorities could also request personal data about family members, including addresses, phone numbers and dates of birth, information that privacy regulators argue is unrelated to travel authorisation.

Watchdogs also questioned how EU citizens could exercise their data protection rights once such information is transferred to US authorities, particularly regarding storage periods and potential misuse.

Parallel negotiations between the EU and the US have also attracted attention.

Discussions around a potential Enhanced Border Security Partnerships framework could allow US authorities to seek access to biometric databases held by European countries, including facial scans and fingerprint records.

European privacy regulators warned that such measures could raise significant concerns regarding fundamental rights and personal data protection for travellers from the EU.

Generative AI in precision oncology faces a trust and safety challenge

A narrative review published in the Journal of Hematology & Oncology examined how generative AI tools could support oncologists in precision cancer care.

In this increasingly data-intensive field, clinicians must cross-reference genomic sequencing results, patient records, imaging findings, and a rapidly expanding body of biomedical literature to inform their decisions.

Researchers found promising results for AI-assisted clinical trial matching and diagnostic report drafting, but also highlighted significant risks that make unsupervised deployment dangerous.

On the positive side, the AI tool TrialGPT demonstrated 87.3% agreement with expert assessments when matching patients to clinical trials, while reducing processing time by an average of 42.6%.

Meanwhile, the vision-language model Flamingo-CXR matched or exceeded the performance of board-certified radiologists in 94% of chest X-ray cases with no clinically relevant findings.

Researchers cautioned, however, that clinically significant errors appeared in 24.8% of evaluated imaging reports, whether AI- or human-generated, underscoring the need for combined oversight.

The review’s authors advocate for ‘Human-in-the-Loop’ workflows, in which human experts review all AI outputs before clinical implementation, and for Retrieval-Augmented Generation techniques that force AI systems to draw on current medical guidelines rather than relying solely on their base training data.
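The Retrieval-Augmented Generation pattern the authors advocate can be sketched as: retrieve relevant passages from a curated guideline corpus, then condition the model's answer on them rather than on training data alone. The guideline corpus, the keyword-overlap scoring, and the prompt format below are all hypothetical placeholders, not the systems evaluated in the review.

```python
# Minimal Retrieval-Augmented Generation (RAG) sketch: ground answers in
# a curated guideline corpus so the model draws on current guidance
# rather than only its base training data. Corpus and scoring are
# illustrative placeholders.
GUIDELINES = {
    "doc1": "Guideline: confirm biomarker status before trial matching.",
    "doc2": "Guideline: imaging reports require clinician sign-off.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank guideline passages by naive keyword overlap with the query."""
    def score(text: str) -> int:
        return len(set(query.lower().split()) & set(text.lower().split()))
    return sorted(GUIDELINES.values(), key=score, reverse=True)[:k]

def answer_with_rag(query: str) -> str:
    context = "\n".join(retrieve(query))
    # A real system would pass this prompt to an LLM; here we only show
    # how the retrieved guideline text is injected into the prompt.
    return f"Context:\n{context}\n\nQuestion: {query}"

print(answer_with_rag("How should imaging reports be handled?"))
```

In a Human-in-the-Loop workflow, the generated draft would then go to a clinician for review before any clinical use, which is the review's central recommendation.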

The key conclusion is that AI should function as an assistant to oncologists, not as an autonomous decision maker.

MIT researchers outline future of AI and physical sciences

AI and the mathematical and physical sciences are entering a new phase of collaboration that could accelerate technological progress and scientific discovery. Researchers increasingly see the relationship as a two-way exchange rather than a one-sided use of AI tools.

A 2025 MIT workshop brought together experts from astronomy, chemistry, materials science, mathematics and physics to examine the future of this collaboration.

Discussions resulted in a white paper published in Machine Learning: Science and Technology, outlining strategies for research institutions and funding bodies.

Participants agreed that stronger computing infrastructure, shared data resources and cross-disciplinary research methods are essential for progress. Scientists also improve AI by analysing neural networks, identifying principles and developing new algorithms.

Researchers highlighted the growing importance of so-called ‘centaur scientists’: specialists trained in both AI and traditional scientific disciplines. Universities, including MIT, are expanding interdisciplinary programmes and research initiatives to train experts who can work across AI and scientific fields.

ChatGPT dynamic visual explanations introduce interactive learning tools

OpenAI has introduced a new ChatGPT feature called dynamic visual explanations, allowing users to interact with mathematical and scientific concepts through real-time visuals.

Instead of relying solely on text explanations or static diagrams, the feature enables users to manipulate formulas and variables and immediately see how those changes affect results. For example, when exploring the Pythagorean theorem, users can adjust the triangle’s sides and see the hypotenuse update instantly.
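The computation behind the article's example is simply the Pythagorean theorem; the sketch below shows the recalculation a user triggers each time the triangle's legs are changed.

```python
import math

# The update behind the interactive example: recompute the hypotenuse
# whenever the user adjusts the two legs of a right triangle.
def hypotenuse(a: float, b: float) -> float:
    """c = sqrt(a^2 + b^2), per the Pythagorean theorem."""
    return math.hypot(a, b)

print(hypotenuse(3, 4))   # 5.0
print(hypotenuse(5, 12))  # 13.0
```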

To use the tool, users can ask ChatGPT questions such as ‘What is a lens equation?’ or ‘How can I find the area of a circle?’ The chatbot responds with both a written explanation and an interactive visual module that users can manipulate directly.

The feature currently supports more than 70 topics in mathematics and science. The topics include binomial squares, Charles’ law, compound interest, Coulomb’s law, exponential decay, Hooke’s law, kinetic energy, linear equations, and Ohm’s law.

OpenAI says it plans to expand the range of topics over time. The feature is already available to all logged-in ChatGPT users. The launch marks a shift in how ChatGPT supports learning. Instead of simply providing answers, the tool now encourages users to explore underlying concepts by experimenting with interactive models.

AI tools have become increasingly common in education, although their role remains widely debated. Some educators worry that students may become overly dependent on AI tools, while others see them as valuable learning aids.

According to OpenAI, more than 140 million people use ChatGPT every week to help with subjects such as mathematics and science, which many learners find challenging. Other technology companies are also experimenting with similar tools. Google’s Gemini introduced interactive diagrams and visual explanations last year.

The new feature joins several other ChatGPT learning tools, including study mode, which guides users through problems step by step, and QuizGPT, which allows users to create flashcards and test themselves before exams.

UK approves £7.5bn AI data centre campus at Elsham Tech Park

Plans for one of the UK’s largest AI data centre campuses have been approved in North Lincolnshire, marking a significant investment in digital infrastructure.

The project, known as Elsham Tech Park, will be developed near Elsham Wolds Industrial Estate on the site of the former RAF Elsham Wolds airfield. The development is expected to deliver more than 1.5 million square metres of hyperscale data centre floorspace across 15 data halls, with an estimated construction cost of around £7.5 billion.

If fully developed, the campus could provide up to 1GW (1,000MW) of computing capacity, placing it among the largest proposed AI data centre facilities in the UK. The project is being led by Elsham Tech Park Ltd, a company created for the development and overseen by infrastructure developer Greystoke.

The proposed campus would cover approximately 176 hectares (435 acres) and include an on-site energy centre capable of generating up to 49.9MW of electricity. Plans also include battery storage facilities, substations, district heating infrastructure, and additional commercial space.

The masterplan incorporates a greenhouse complex that reuses excess heat from the data centre servers to support agricultural production. Developers say this approach could improve energy efficiency by enabling greenhouse cultivation using waste heat generated by computing infrastructure.

Construction is expected to begin in 2027, with the first phase of the campus scheduled to open in 2029. The development timeline covers roughly ten years.

During construction, the project could support between 2,600 and 3,600 full-time equivalent jobs annually across on-site and supply chain roles. Once operational, the facility is expected to create around 900 long-term skilled jobs.

North Lincolnshire Council said the project could attract up to £10 billion in investment and strengthen the region’s role in the country’s growing AI and cloud computing infrastructure sector.
