EU draft AI code faces industry pushback

The tech industry remains concerned about a newly released draft of the Code of Practice on General-Purpose Artificial Intelligence (GPAI), which aims to help AI providers comply with the EU’s AI Act.

The proposed rules, which cover transparency, copyright, risk assessment, and mitigation, have sparked significant debate, especially among copyright holders and publishers.

Industry representatives argue that the draft still presents serious issues, particularly regarding copyright obligations and external risk assessments, which they believe could hinder innovation.

Tech lobby groups, such as the CCIA and DOT Europe, have expressed dissatisfaction with the latest draft, highlighting that it continues to impose burdensome requirements beyond the scope of the original AI Act.

Notably, the mandatory third-party risk assessments both before and after deployment remain a point of contention. Despite some improvements in the new version, these provisions are seen as unnecessary and potentially damaging to the industry.

Copyright concerns remain central, with organisations like News Media Europe warning that the draft still fails to respect copyright law. They argue that it is not enough to expect AI companies merely to make ‘best efforts’ to avoid using content without proper authorisation.

Additionally, the draft is criticised for failing to fully address fundamental rights risks, which, according to experts, should be a primary concern for AI model providers.

The draft is open for feedback until 30 March, with the final version expected to be released in May. However, the European Commission’s ability to formalise the Code under the AI Act, which comes into full effect in 2027, remains uncertain.

Meanwhile, the issue of copyright and AI is also being closely examined by the European Parliament.

For more information on these topics, visit diplomacy.edu.

Google enhances Gemini AI with smarter personalisation

Google has announced an update to its Gemini AI assistant, enhancing personalisation to better anticipate user needs and deliver responses that feel more like those of a personal assistant.

The feature, initially available on desktop before rolling out to mobile, allows Gemini to offer tailored recommendations, such as travel ideas, based on search history and, in the future, data from apps like Photos and YouTube.

Users can opt in to the new personalisation features, sharing details like dietary preferences or past conversations to refine responses further.

Google assures that users must explicitly grant permission for Gemini to access search history and other services, and they can disconnect at any time.

At the same time, this level of contextual awareness could give Google an edge over rivals such as OpenAI’s ChatGPT, since Gemini can draw on Google’s vast ecosystem of user data.

The update signals a shift in how users interact with AI, bringing it closer to traditional search while raising questions for publishers and SEO professionals.

As Gemini increasingly provides direct, personalised answers, it may reduce the need for users to visit external websites. While currently experimental, the potential for Google to push broader adoption of AI-driven personalisation could reshape digital content discovery and search behaviour in the future.

For more information on these topics, visit diplomacy.edu.

FTC confirms no delay in Amazon trial

The US Federal Trade Commission (FTC) announced on Wednesday that it does not need to delay its September trial against Amazon, contradicting an earlier claim by one of its attorneys about resource shortages.

Jonathan Cohen, an FTC lawyer, retracted his statement that cost-cutting measures had strained the agency’s ability to proceed, assuring the court that the FTC is fully prepared to litigate the case.

FTC Chairman Andrew Ferguson reaffirmed the agency’s commitment, dismissing concerns over budget constraints and stating that the FTC will not back down from taking on Big Tech.

Earlier in the day, Cohen had described a ‘dire resource situation,’ citing employee resignations, a hiring freeze, and restrictions on legal expenses. However, he later clarified that these challenges would not impact the case.

The lawsuit, filed in 2023, accuses Amazon of using ‘dark patterns’ to mislead consumers into enrolling in automatically renewing Prime subscriptions, a program with over 200 million users.

With claims exceeding $1 billion, the trial is expected to be a high-profile battle between regulators and one of the world’s largest tech companies. Amazon has denied any wrongdoing, and three of its senior executives are also named in the case.

For more information on these topics, visit diplomacy.edu.

Mark Cuban: AI is a tool, not the answer

Mark Cuban, the tech entrepreneur and investor, spoke at the SXSW conference, where he highlighted the importance of AI for small businesses. He stressed that while AI can be a valuable tool, it should never be seen as the ultimate answer to business success. Cuban explained that AI can help entrepreneurs by making it easier to start and grow businesses, answering questions, and aiding in tasks like research, emails, and sales calls. However, he cautioned against over-relying on AI.

Cuban encouraged entrepreneurs to spend time learning about AI, pointing out how much easier it is to start a business today compared to the past, thanks to the availability of AI tools and internet access. He acknowledged that AI can make mistakes and isn’t perfect, but noted that human experts can also be wrong. In creative fields, Cuban argued that while AI can help with certain tasks like video creation, it’s not a substitute for human creativity, especially when it comes to things like writing scripts or generating quality art.

The tech mogul highlighted that AI should amplify human skills, not replace them. He warned that those who neglect to use AI might find themselves at a disadvantage, as competitors who utilise AI will have the edge.

For more information on these topics, visit diplomacy.edu.

Spain approves bill to regulate AI-generated content

Spain’s government has approved a bill imposing heavy fines on companies that fail to label AI-generated content, aiming to combat the spread of deepfakes.

The legislation, which aligns with the European Union’s AI Act, classifies non-compliance as a serious offence, with penalties reaching up to €35 million or 7% of a company’s global revenue.

Digital Transformation Minister Oscar Lopez stressed that AI can be a force for good but also a tool for misinformation and threats to democracy.

The bill also bans manipulative AI techniques, such as subliminal messaging targeting vulnerable groups, and restricts the use of AI-driven biometric profiling, except in cases of national security.

Spain is one of the first EU nations to implement these strict AI regulations, going beyond the looser US approach, which relies on voluntary compliance.

A newly established AI supervisory agency, AESIA, will oversee enforcement, alongside sector-specific regulators handling privacy, financial markets, and law enforcement concerns.

For more information on these topics, visit diplomacy.edu.

The future of digital regulation between the EU and the US

Understanding the DMA and DSA regulations

The Digital Markets Act (DMA) and the Digital Services Act (DSA) are two major regulatory frameworks introduced by the EU to create a fairer and safer digital environment. While both fall under the broader Digital Services Act package, they serve distinct purposes.

The DMA focuses on ensuring fair competition by regulating large online platforms, known as gatekeepers, which have a dominant influence on digital markets. It prevents these companies from engaging in monopolistic practices, such as self-preferencing their own services, restricting interoperability, or using business data unfairly. The goal is to create a more competitive landscape where smaller businesses and consumers have more choices.

On the other hand, the DSA is designed to make online spaces safer by holding platforms accountable for illegal content, misinformation, and harmful activities. It imposes stricter content moderation rules, enhances transparency in digital advertising, and ensures better user rights protection. Larger platforms with significant user bases face even greater responsibilities under this act.

The key difference in regulation is that the DMA follows an ex-ante approach, meaning it imposes strict rules on gatekeepers before unfair practices occur. The DSA takes an ex-post approach, requiring platforms to monitor risks and take corrective action after problems arise. This means the DMA enforces competition while the DSA ensures online safety and accountability.

A key component of the DSA is its emphasis on transparency and user rights. Platforms must explain how their algorithms curate content, are barred from using sensitive data for targeted advertising, and may not employ manipulative design practices such as misleading cookie banners. The most powerful platforms, classified as Very Large Online Platforms (VLOPs) or Very Large Online Search Engines (VLOSEs), are also required to assess and report on ‘systemic risks’ linked to their services, including threats to public safety, democratic discourse, and mental well-being. However, these reports often lack meaningful detail, as illustrated by TikTok’s inadequate assessment of its role in election-related misinformation.

Enforcement is critical to the success of the DSA. While the European Commission directly oversees the largest platforms, national regulators, known as Digital Services Coordinators (DSCs), play a key role in monitoring compliance. However, enforcement challenges remain, particularly in countries like Germany, where understaffing raises concerns about effective regulation. Across the EU, over 60 enforcement actions have already been launched against major tech firms, yet Silicon Valley’s biggest players are actively working to undermine European rules.

Together, the DMA and the DSA reshape how Big Tech companies operate in the EU, fostering competition and ensuring a safer and more transparent digital ecosystem for users.

Trump and Silicon Valley’s fight against EU regulations

The close relationship between Donald Trump and the Silicon Valley tech elite has significantly influenced US policy towards European digital regulations. Since Trump’s return to office, Big Tech executives have actively lobbied against these regulations and have urged the new administration to defend tech firms from what they describe as EU ‘censorship.’

Joel Kaplan, Meta’s chief lobbyist, has gone as far as to equate EU regulations with tariffs, a stance that aligns with the Trump administration’s broader trade war strategy. The administration sees these regulations as barriers to US technological dominance, arguing that the EU is trying to tax and control American innovation rather than foster its own competitive tech sector.

Figures like Elon Musk and Mark Zuckerberg have aligned themselves with Trump, leveraging their influence to oppose EU legislation such as the DSA. Meta’s controversial policy changes and Musk’s X platform’s lax approach to content moderation illustrate how major tech firms are resisting regulatory oversight while benefiting from Trump’s protectionist stance.

The White House and the House Judiciary Committee have raised concerns that these laws unfairly target American technology companies, restricting their ability to operate in the European market.

Brendan Carr, chairman of the FCC, has recently voiced strong concerns regarding the DSA, which he argues could clash with America’s free speech values. Speaking at the Mobile World Congress in Barcelona, Carr warned that its approach to content moderation might excessively limit freedom of expression. His remarks reflect a broader criticism from US officials, as Vice President JD Vance had also denounced European content moderation at a recent AI summit in Paris, labelling it as ‘authoritarian censorship.’

These officials argue that the DMA and the DSA create barriers that limit American companies’ innovations and undermine free trade. In response, the House Judiciary Committee has formally challenged the European Commission, stating that certain US products and services may no longer be available in Europe due to these regulations. Notably, the Biden administration had also directed its trade and commerce departments to investigate whether these EU laws restrict free speech and to recommend countermeasures.

Recently, US President Donald Trump has escalated tensions with the EU, threatening tariffs in retaliation for what he calls ‘overseas extortion.’ The memorandum, signed by Trump on 21 February 2025, directs the administration to review EU and UK policies that might force US tech companies to develop or use products that ‘undermine free speech or foster censorship.’ The memo also targets Digital Services Taxes (DSTs), claiming that foreign governments unfairly tax US firms ‘simply because they operate in foreign markets.’

EU’s response: Digital sovereignty at stake

However, the European Commission insists that these taxes are applied equally to all large digital companies, regardless of their country of origin, ensuring fair contributions from businesses profiting within the EU. It has also defended its regulations, arguing that they promote fair competition and protect consumer rights.

EU officials see these policies as fundamental to Europe’s digital sovereignty, ensuring that powerful tech firms operate transparently and fairly in the region. As they push back against what they see as US interference and tensions rise, the dispute over how to regulate Big Tech could shape the future of digital markets and transatlantic trade relations.

Eventually, this clash could lead to a new wave of trade conflicts between the USA and the EU, with potential economic and geopolitical consequences for the global tech industry. With figures like JD Vance and Jim Jordan also attacking the DSA and the DMA, and Trump himself framing EU regulations as economic warfare, Europe faces mounting pressure to weaken its tech laws. Additionally, the withdrawal of the EU Artificial Intelligence Liability Directive (AILD) following the Paris AI Summit and JD Vance’s refusal to sign a joint AI statement raised more concerns about Europe’s ability to resist external pushback. The risk that Trump will use economic and security threats, including NATO involvement, as leverage against EU enforcement underscores the urgency of a strong European response.

Another major battleground is AI regulation. The EU’s AI Act is one of the world’s first comprehensive AI laws, setting strict guidelines for AI transparency, risk assessment, and data usage. Meanwhile, the USA has taken a more industry-led approach, with minimal government intervention.

This regulatory gap could create further tensions as European lawmakers demand compliance from American AI firms. The recent withdrawal of the AILD under US pressure highlights how external lobbying can influence European policymaking.

However, if the EU successfully enforces its AI rules, it could set a global precedent, forcing US firms to comply with European standards if they want to operate in the region. This scenario mirrors what happened with the GDPR (General Data Protection Regulation), which led to global changes in privacy policies.

To counter the growing pressure, the EU has so far remained steadfast in enforcing the DSA, the DMA, and the AI Act, ensuring that these frameworks are not compromised under US influence. Beyond regulation, Europe must also bolster its digital industrial capabilities to keep pace. The EUR 200 billion AI investment is a step in the right direction, but Europe requires more resilient digital infrastructures, stronger back-end technologies, and better support for its tech companies.

Currently, the EU is doubling down on its push for digital sovereignty by investing in:

  • Cloud computing infrastructure to reduce reliance on US providers (e.g., AWS, Microsoft Azure)
  • AI development and semiconductor manufacturing (through the European Chips Act)
  • Alternative social media platforms and search engines to challenge US dominance

These efforts aim to lessen European dependence on US Big Tech and create a more self-sufficient digital ecosystem.

The future of digital regulations

Despite the escalating tensions, both the EU and the USA recognise the importance of transatlantic tech cooperation. While their regulatory approaches differ significantly, there are areas where collaboration could still prevail. Cybersecurity remains a crucial issue, as both sides face growing threats from hostile state actors. Strengthening cybersecurity partnerships could provide a shared framework for protecting critical infrastructure and digital ecosystems. Another potential area for collaboration is the development of joint AI safety standards, ensuring that emerging technologies are regulated responsibly without stifling innovation. Additionally, data-sharing agreements remain essential to maintaining smooth digital trade and cross-border business operations.

Past agreements, such as the EU-US Data Privacy Framework, have demonstrated that cooperation is possible. However, whether similar compromises can be reached regarding the DMA, the DSA, and the AI Act remains uncertain. Fundamental differences in regulatory philosophy continue to create obstacles, with the EU prioritising consumer protection and market fairness while the USA maintains a more business-friendly, innovation-driven stance.

Looking ahead, the future of digital regulations between the EU and the USA is likely to remain contentious. The European Union appears determined to enforce stricter rules on Big Tech, while the United States—particularly under the Trump administration—is expected to push back against what it perceives as excessive European regulatory influence. Unless meaningful compromises are reached, the global internet may further fragment into distinct regulatory zones. The European model would emphasise strict digital oversight, strong privacy protections, and policies designed to ensure fair competition. The USA, in contrast, would continue to prioritise a more business-led approach, favouring self-regulation and innovation-driven policies.

As the digital landscape evolves, the coming months and years will be crucial in determining whether the EU and the USA can find common ground on tech regulation or whether their differences will lead to deeper division. The stakes are high, affecting not only businesses but also consumers, policymakers, and the broader future of the global internet. The path forward remains uncertain, but the decisions made today will shape the structure of the digital world for generations to come.

Ultimately, the outcome of this ongoing transatlantic dispute could have wide-reaching implications, not only for the future of digital regulation but also for global trade relations. While the US government and the Silicon Valley tech elite are likely to continue their pushback, the EU appears steadfast in its determination to ensure that its digital regulations are enforced to maintain a fair and safe digital ecosystem for all users. As this global battle unfolds, the world will be watching as the EU and USA navigate the evolving landscape of digital governance.

New digital health file system revolutionises medical data management in Greece

A new electronic health file system is launching on Tuesday in a preliminary form, aiming to provide doctors with an easier, safer, and more reliable way to access Greek patients’ medical histories.

The platform, expected to be fully operational by the end of the year, will store comprehensive records for every patient with a social security number (AMKA).

Once completed, the system will compile detailed medical histories, including hospital admissions, surgeries, diagnostic tests, prescriptions, vaccinations, allergies, and treatment protocols.
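
For illustration only, the short Python sketch below shows one way such a unified record could be structured, keyed by an AMKA number; the field names and the in-memory registry are assumptions made for the example and do not describe the actual Greek system.

```python
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    """Hypothetical unified health record keyed by a patient's AMKA number."""
    amka: str  # social security number used as the unique key
    hospital_admissions: list[str] = field(default_factory=list)
    surgeries: list[str] = field(default_factory=list)
    diagnostic_tests: list[str] = field(default_factory=list)
    prescriptions: list[str] = field(default_factory=list)
    vaccinations: list[str] = field(default_factory=list)
    allergies: list[str] = field(default_factory=list)
    treatment_protocols: list[str] = field(default_factory=list)

# A registry mapping AMKA numbers to records, so a clinician can retrieve
# a full history from a single identifier (illustrative only).
registry: dict[str, PatientRecord] = {}

record = PatientRecord(amka="01018012345")
record.allergies.append("penicillin")
registry[record.amka] = record

print(registry["01018012345"].allergies)  # ['penicillin']
```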

An upgrade like this will significantly streamline healthcare access for both doctors and patients.

The enhanced MyHealth app will eliminate the need for patients to carry test results or verbally summarise their medical history.

It is particularly expected to benefit people with disabilities, as the entire process of claiming benefits will be handled electronically, removing the need for in-person evaluations by specialist committees.

For more information on these topics, visit diplomacy.edu.

Nagasaki University launches AI program for medical student training

Nagasaki University in southwestern Japan, in collaboration with a local systems development company, has unveiled a new AI program aimed at enhancing medical student training.

The innovative program allows students to practice interviews with virtual patients on a screen, addressing the growing difficulty of securing simulated patients for training, especially in regional areas facing population declines.

In a demonstration earlier this month, an AI-powered virtual patient exhibited symptoms such as fever and cough, responding appropriately to questions from a medical student.

Scheduled for introduction by March 2026, the technology will allow students to interact with virtual patients of different ages, genders, and symptoms, enhancing their learning experience.

The university plans to enhance the program with scoring and feedback functions to make the training more efficient and improve the quality of learning.
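
As a rough, purely hypothetical illustration of how a simulated patient interview and a basic coverage score might fit together (the university’s actual AI program has not been described in technical detail), here is a toy keyword-based sketch in Python; the symptom profile and scoring rule are invented for the example.

```python
# Toy virtual-patient interview: a keyword-matched symptom profile plus a
# simple coverage score. Purely illustrative; the real system is AI-driven.

PATIENT_PROFILE = {
    "age": 68,
    "gender": "female",
    "symptoms": {
        "fever": "I've had a temperature of about 38 degrees since yesterday.",
        "cough": "Yes, a dry cough, mostly at night.",
        "appetite": "I haven't felt like eating much.",
    },
}

def patient_reply(question: str) -> str:
    """Return the first symptom answer whose keyword appears in the question."""
    q = question.lower()
    for keyword, answer in PATIENT_PROFILE["symptoms"].items():
        if keyword in q:
            return answer
    return "I'm not sure, doctor. Could you ask that another way?"

def interview(questions: list[str]) -> float:
    """Run a scripted interview and score how many symptoms were explored."""
    asked = set()
    for question in questions:
        print("Student:", question)
        print("Patient:", patient_reply(question))
        asked.update(k for k in PATIENT_PROFILE["symptoms"] if k in question.lower())
    return len(asked) / len(PATIENT_PROFILE["symptoms"])

score = interview([
    "Do you have a fever?",
    "Have you noticed any cough?",
])
print(f"Symptom coverage: {score:.0%}")  # 67%
```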

Shinya Kawashiri, an associate professor at the university’s School of Medicine, expressed hope that the system would lead to more effective study methods.

Toru Kobayashi, a professor at the university’s School of Information and Data Sciences, highlighted the program as a groundbreaking initiative in Japan’s medical education landscape.

For more information on these topics, visit diplomacy.edu.

NHS looks into Medefer data flaw after security concerns

The NHS is investigating allegations that a software flaw at private medical services company Medefer left patient data vulnerable to hacking.

The flaw, discovered in November, affected Medefer’s internal patient record system in the UK, which handles 1,500 NHS referrals monthly.

A software engineer who found the issue believes the vulnerability may have existed for six years, but Medefer denies this claim, stating no data has been compromised.

The engineer discovered that unprotected application programming interfaces (APIs) could have allowed outsiders to access sensitive patient information.
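
To make the nature of such a flaw concrete, the hypothetical Python sketch below contrasts a function that hands patient data to any caller with one that first verifies an access token; it is a generic illustration of an unprotected versus protected API, not a description of Medefer’s actual system.

```python
import hmac

# Hypothetical patient store; in a real system this would be a database
# behind a web framework, not an in-memory dictionary.
PATIENTS = {"NHS-1234567": {"name": "J. Smith", "referral": "cardiology"}}
API_TOKEN = "s3cret-token"  # illustrative only; real systems use proper auth

def get_patient_unprotected(patient_id: str) -> dict:
    """Flawed design: anyone who can reach the endpoint can read records."""
    return PATIENTS[patient_id]

def get_patient_protected(patient_id: str, token: str) -> dict:
    """Safer design: the caller must present a valid credential first."""
    if not hmac.compare_digest(token, API_TOKEN):  # constant-time comparison
        raise PermissionError("missing or invalid credentials")
    return PATIENTS[patient_id]

print(get_patient_unprotected("NHS-1234567"))           # exposed to any caller
print(get_patient_protected("NHS-1234567", API_TOKEN))  # requires the token
```

In a production web API, the same check would normally be enforced by the framework’s authentication layer on every endpoint, rather than written inline as here.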

While Medefer has insisted that there is no evidence of any breach, it has commissioned an external security agency to review its systems. The agency confirmed that no breach was found, and the company asserts that the flaw was fixed within 48 hours of being discovered.

Cybersecurity experts have raised concerns about the potential risks posed by the flaw, emphasising that a proper investigation should have been conducted immediately.

Medefer reported the issue to the Information Commissioner’s Office (ICO) and the Care Quality Commission (CQC), both of which found no further action necessary. However, experts suggest that a more thorough response could have been beneficial given the sensitive nature of the data involved.

For more information on these topics, visit diplomacy.edu.

X faces major outage in the US and UK

Social media platform X is experiencing widespread outages in the US and the UK, with thousands of users reporting issues, according to outage tracking website Downdetector.

Reports indicate over 21,000 incidents in the US and more than 10,800 in the UK, suggesting significant disruptions.

Downdetector, which gathers status reports from various sources, noted that the actual number of affected users may be higher.

Many have turned to other platforms to discuss the outage, but X has not yet responded to requests for comment.

The cause of the disruption remains unclear, and there is no official timeline for when full service will be restored. Users continue to face difficulties accessing the platform, impacting communication and social media activity globally.

For more information on these topics, visit diplomacy.edu.