DIGITALEUROPE urges changes to EU AI Act rules for industry

European industry representatives are urging policymakers to reconsider parts of the EU AI Act, arguing that the current framework could impose significant compliance costs on companies developing AI tools for industrial and medical technologies.

According to Cecilia Bonefeld-Dahl, director-general of DIGITALEUROPE, manufacturers of high-tech machines, medical devices, and radio equipment are already subject to strict product safety regulations, and layering AI-specific requirements on top could create unnecessary administrative burdens for these heavily regulated companies. She argues that policymakers should aim for balanced AI regulation that encourages innovation while maintaining safety standards.

Industry groups warn that classifying certain AI systems as high-risk under Annex I of the AI Act could be particularly costly for smaller firms. DIGITALEUROPE estimates that a company with around 50 employees developing an AI-based product could incur initial compliance costs of €320,000 to €600,000, followed by annual expenses of up to €150,000. According to the organisation, such costs could reduce profits significantly and discourage smaller companies from pursuing AI innovation.
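To put the cited figures in perspective, the sketch below computes the multi-year outlay they imply for a 50-person firm. This is a back-of-envelope illustration based only on the numbers DIGITALEUROPE reports; the function name and the five-year horizon are illustrative assumptions, not part of the organisation's methodology.

```python
# Illustrative calculation (assumed helper, not from DIGITALEUROPE):
# initial compliance cost of EUR 320,000-600,000, plus up to EUR 150,000 per year.
def total_compliance_cost(initial: float, annual: float, years: int) -> float:
    """Cumulative compliance outlay: one-off initial cost plus recurring annual cost."""
    return initial + annual * years

# Five-year range implied by the reported estimates.
low = total_compliance_cost(320_000, 150_000, 5)
high = total_compliance_cost(600_000, 150_000, 5)
print(f"5-year total: EUR {low:,.0f} to EUR {high:,.0f}")
```

On these assumptions, a small firm would face roughly EUR 1.1-1.4 million over five years, which helps explain why the organisation argues such costs could deter smaller companies from AI development.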

Manufacturing and medical technology sectors across Europe employ millions of workers and increasingly rely on AI to improve product performance and safety. Industry representatives argue that many applications, such as AI systems used to enhance industrial equipment safety or improve medical devices, already operate under established regulatory frameworks. They contend that these existing frameworks could be adapted rather than supplemented with additional layers of regulation.

The broader regulatory landscape is also contributing to concerns among technology companies. Over the past six years, the EU has introduced nearly 40 new technology-related regulations, some of which overlap or impose similar compliance requirements. DIGITALEUROPE estimates that compliance with the AI Act could cost companies approximately €3.3 billion annually, while cybersecurity and data-sharing regulations add further financial obligations.

Industry leaders warn that rising compliance costs could affect investment in AI development across Europe. Current estimates suggest that the EU accounts for about 7.5% of global AI investment, significantly behind the United States and China.

DIGITALEUROPE has called on the EU institutions to consider postponing parts of the AI Act’s implementation timeline to allow further discussion on how high-risk AI systems should be defined. Supporters of this approach argue that additional consultation could help ensure the regulatory framework protects consumers while also enabling European companies to compete globally in the rapidly evolving AI sector.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU lawmakers move forward on AI Act changes

Members of the European Parliament have reached a preliminary political agreement on amendments to the EU Artificial Intelligence Act. The compromise will be reviewed by parliamentary committees before a scheduled vote in Brussels.

Lawmakers in the EU agreed to extend compliance deadlines for some high-risk AI systems. The changes aim to give companies and regulators more time to prepare technical standards and enforcement frameworks.

The proposed amendments also include a ban on AI systems that create non-consensual explicit deepfakes. Officials in the EU say the measure aims to strengthen consumer protection and improve online safety for children.

Industry groups in the EU have raised concerns about compliance burdens linked to the revised rules. Policymakers in the EU continue negotiations as the legislation moves toward committee approval.


Civil society urges stronger EU digital fairness rules

More than 200 civil society organisations have urged the European Commission to deliver strong consumer protections through the upcoming Digital Fairness Act. Advocacy groups in the EU say the proposal must address risks created by modern online platforms.

Campaigners argue that many existing EU consumer laws were designed decades ago and no longer reflect the realities of the digital market. The coalition warned policymakers in the EU not to treat regulatory simplification as a path toward deregulation.

Advocates are pushing for binding rules targeting deceptive design practices and addictive digital features. Survey responses across the EU show broad public support for stronger protections against dark patterns and unfair personalisation.

The European Commission is expected to present the Digital Fairness Act later this year. Officials in the EU are also considering expanding enforcement powers to strengthen consumer safeguards online.


New venture aims to build AI that understands the real world

AI pioneer Yann LeCun has secured more than $1 billion in funding for a new startup that aims to rethink how AI systems learn about the world.

The venture, called Advanced Machine Intelligence (AMI), will focus on developing AI that learns from real-world signals, such as camera and sensor data, rather than relying primarily on text. According to the company, such systems could make better decisions by understanding how events unfold in the physical world.

AMI plans to build what researchers call ‘world models’, AI systems designed to predict the consequences of actions before they happen. Developers believe that grounding AI in real-world data could make the technology more reliable and easier to control, especially in critical safety applications.

Operations will span several global research hubs, including Paris, New York City, Montreal and Singapore. The company has already begun assembling its leadership team, appointing entrepreneur Alex LeBrun as chief executive and AI researcher Saining Xie as chief science officer.

Support for the project quickly appeared online. French President Emmanuel Macron welcomed the launch, saying it represented a new chapter in AI and highlighting the role of researchers and innovators in shaping the technology's future.

LeCun is widely regarded as one of the key figures behind modern AI. In 2018, he shared the prestigious Turing Award with fellow researchers Geoffrey Hinton and Yoshua Bengio for their contributions to deep learning.

Research at AMI will focus on building AI systems that can reason, plan actions and maintain long-term memory. Possible applications range from robotics and industrial automation to healthcare and wearable technologies, areas where dependable AI could have a major impact.

LeCun and his team argue that genuine intelligence cannot emerge from language alone. Understanding the world, they say, requires machines that learn directly from it.


UK watchdog demands stronger child safety on social platforms

The British communications regulator Ofcom has called on major technology companies to enforce stricter age controls and improve safety protections for children using online platforms.

The warning targets services widely used by young audiences, including Facebook, Instagram, Roblox, Snapchat, TikTok and YouTube.

Regulators said that despite existing minimum age policies, large numbers of children under the age of 13 continue to access platforms intended for older users.

According to Ofcom research, more than 70 percent of children aged 8 to 12 regularly use such services.

Authorities have asked companies to demonstrate how they will strengthen protections and ensure compliance with minimum age requirements.

Platforms must present their plans by 30 April, after which Ofcom will publish an assessment of their responses and determine whether further regulatory action is necessary.

The regulator also outlined several key areas requiring improvement.

Companies in the UK are expected to implement more effective age-verification systems, strengthen protections against online grooming and ensure that recommendation algorithms do not expose children to harmful content.

Another concern involves product development practices.

Ofcom warned that new digital features, including AI tools, should not be tested on children without adequate safety assessments. Platforms are required to evaluate potential risks before launching significant updates.

The measures are part of the UK’s broader regulatory framework introduced under the Online Safety Act, which aims to reduce exposure to harmful online material.

The law requires platforms to prevent children from accessing content linked to pornography, suicide, self-harm and eating disorders, while limiting the promotion of violent or abusive material in recommendation feeds.

Ofcom indicated that enforcement action may follow if companies fail to demonstrate meaningful improvements. Regulators argue that stronger safeguards are necessary to restore public trust and ensure that digital platforms prioritise child safety in their design and operation.


AI-powered Copilot Health platform introduced by Microsoft

Microsoft has introduced Copilot Health, a new feature that uses AI to help users interpret personal health data and prepare for medical consultations.

The tool will operate as a separate and secure environment within Microsoft’s Copilot ecosystem, allowing users to combine health records, wearable data, and medical history into a single profile. The system then uses AI to analyse patterns and generate personalised insights intended to support conversations with healthcare professionals.

Microsoft said the feature aims to help people better understand existing medical information rather than replace clinical care. Users can review trends such as sleep patterns, activity levels, and vital signs gathered from wearable devices, alongside test results and visit summaries from healthcare providers.

Copilot Health can integrate data from more than 50 wearable devices, including systems connected through platforms such as Apple Health, Fitbit, and Oura. The platform can also access health records from over 50,000 US hospitals and provider organisations through HealthEx, as well as laboratory test results from Function.

According to Microsoft, the system builds on ongoing research into medical AI systems, including work on the Microsoft AI Diagnostic Orchestrator (MAI-DxO). The company said future publications will explore how such systems could assist in analysing complex medical cases.

Privacy and security are central elements of the design. Microsoft stated that Copilot Health data and conversations are stored separately from standard Copilot interactions and protected through encryption and access controls. The company also noted that health information used in the service will not be used to train AI models.

Development of the system involves Microsoft’s internal clinical team and an external advisory group of more than 230 physicians from 24 countries. The company said Copilot Health has also achieved ISO/IEC 42001 certification, an international standard for AI management systems.

The feature is being introduced through a phased rollout, beginning with a waitlist for early users who will help shape the service as it develops.


EU competition regulators expand scrutiny across the entire AI ecosystem

Competition authorities in the EU are broadening their oversight of the AI sector, examining every layer of the technology’s value chain.

Speaking at a conference in Berlin, EU competition chief Teresa Ribera explained that regulators are analysing the full ‘AI stack’ instead of focusing solely on consumer applications.

According to the competition chief, scrutiny extends beyond visible AI tools to the systems that support them. Investigations are assessing underlying models, the data used to train those models, as well as cloud infrastructure and energy resources that power AI systems.

Regulatory attention has already reached the application layer.

The European Commission opened an investigation in 2025 involving Meta after concerns emerged that the company could restrict competing AI assistants on its messaging platform WhatsApp.

Following regulatory pressure, Meta proposed allowing rival AI chatbots on the platform in exchange for a fee. European regulators are now assessing the proposal to determine whether additional intervention is necessary to preserve fair competition in rapidly evolving digital markets.

Authorities have also examined concentration risks across other parts of the AI ecosystem, including the infrastructure layer dominated by companies such as Nvidia.

Regulators argue that effective competition oversight must address the entire technology stack as AI markets expand quickly.


Cambridge researchers warn AI toys misread children’s emotions

AI toys for young children may misread emotions and respond inappropriately, according to a study by researchers at the University of Cambridge. Developmental psychologists observed interactions between children aged three to five and conversational AI-powered toys.

Findings showed the toys often struggled with pretend play and emotional cues. In several cases, children attempted to express sadness or initiate imaginative scenarios, while the AI responded with unrelated or overly scripted replies, leaving emotional signals unrecognised.

Researchers warned that such limitations could affect children’s emotional development and imaginative play. Early years practitioners also raised concerns about how toy-collected conversation data may be used and whether children could start treating the devices as trusted companions.

The study calls for stronger regulation and the introduction of safety certification for AI toys aimed at young children. Toy developer Curio stated that improving AI interactions and maintaining parental controls remain priorities as the technology continues to develop.


Deepfakes in campaign ads expose limits of Texas election law

AI-generated political advertisements are becoming increasingly visible in Texas election campaigns, highlighting gaps in existing laws designed to regulate deepfakes in political messaging.

Texas was the first state in the United States to adopt legislation restricting the use of deepfakes in campaign advertisements. However, the law applies only to state-level races. It does not cover federal contests, including the US Senate race that has dominated advertising spending in Texas and featured several AI-generated campaign ads.

Some lawmakers and experts warn that the growing use of AI-generated political content could complicate election campaigns. During recent primary contests, campaign advertisements featuring manipulated or synthetic images of political figures circulated widely across media platforms.

State Senator Nathan Johnson, who has proposed legislation to strengthen the state’s rules regarding deepfakes, said the rapid evolution of AI technology makes the issue increasingly urgent. Johnson argues that voters should be able to make decisions based on accurate information rather than manipulated media.

The current Texas law, adopted in 2019, contains several limitations. It only applies to video content, requires proof of intent to deceive or harm a candidate, and covers material distributed within 30 days of an election. Critics say these restrictions make the law difficult to enforce and limit its practical impact.

Lawmakers from both parties attempted to address some of these issues during the most recent legislative session. Proposed reforms included removing the 30-day restriction, requiring clear disclosure when AI is used in political advertising, and allowing candidates to pursue legal action to block misleading ads. Although both chambers of the Texas legislature passed versions of the legislation, the proposals ultimately failed to become law.

Supporters of stricter regulation argue that the rapid advancement of generative AI tools is making it harder to distinguish synthetic media from authentic content. Some political leaders warn that increasingly realistic deepfakes could eventually influence election outcomes.

Others, however, caution that regulating political content raises constitutional concerns. Some lawmakers argue that many AI-generated political ads resemble satire or parody, forms of political speech protected by the First Amendment.

At the federal level, regulation of congressional campaign advertising falls under the Federal Election Commission’s authority. In 2024, the agency declined to begin a formal rulemaking process on AI-generated political ads, leaving states and policymakers to continue debating how to address the emerging issue.

Experts warn that as AI tools continue to improve, distinguishing authentic political messaging from deepfakes and other forms of synthetic content will likely become more complex.


Biased AI suggestions shift societal attitudes

AI-powered writing tools may do more than speed up typing: they can influence the way people think. A Cornell study found that biased autocomplete suggestions can subtly shift users’ opinions on issues such as the death penalty, fracking, GMOs, and voting rights.

Experiments with over 2,500 participants revealed that users’ views gravitated toward the AI’s predetermined bias. Attempts to warn participants about the AI’s bias, either before or after writing, did not prevent the shifts.

Researchers noted that the effect occurs because users effectively write biased viewpoints themselves, a process psychology research shows can alter personal attitudes.

The influence was consistent across political topics and participants of all leanings. Compared with simply providing pre-written arguments, biased AI suggestions had a stronger effect on shaping opinions.

Researchers warn that as autocomplete and generative AI tools become increasingly prevalent, covert persuasion through AI may pose serious societal risks.

The study, led by Sterling Williams-Ceci and Mor Naaman of Cornell Tech, highlights the potential for AI to shape beliefs without users noticing, and underscores the need for oversight as AI writing assistants enter everyday communication.
