X faces EU probe over AI data use

Elon Musk’s X platform is under formal investigation by the Irish Data Protection Commission over its alleged use of public posts from EU users to train the Grok AI chatbot.

The probe is centred on whether X Internet Unlimited Company, the platform’s newly renamed Irish entity, has adhered to key GDPR principles while sharing publicly accessible data, like posts and interactions, with its affiliate xAI, which develops the chatbot.

Concerns have grown over the lack of explicit user consent, especially as other tech giants such as Meta signal similar data usage plans.

The investigation is part of a wider regulatory push in the EU to hold AI developers accountable instead of allowing unchecked experimentation. Experts note that many AI firms have deployed tools under a ‘build first, ask later’ mindset, an approach at odds with Europe’s strict data laws.

Should regulators conclude that even publicly available data requires user consent for AI training, it could force a dramatic shift in how AI models are developed, not just in Europe but around the world.

Enterprises are now treading carefully. The investigation into X is already affecting AI adoption across the continent, with legal and reputational risks weighing heavily on decision-makers.

In one case, a Nordic bank halted its AI rollout midstream after its legal team couldn’t confirm whether European data had been used without proper disclosure. Instead of pushing ahead, the project was rebuilt using fully documented, EU-based training data.

The consequences could stretch far beyond the EU. Ireland’s probe might become a global benchmark for how governments view user consent in the age of data scraping and machine learning.

Rather than remaining a region-specific enforcement action, the investigation could inspire similar moves from regulators in places like Singapore and Canada. As AI continues to evolve, companies may have no choice but to adopt more transparent practices or face a rising tide of legal scrutiny.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US exempts key electronics from China import taxes

Smartphones, computers, and key tech components have been granted exemption from the latest round of US tariffs, providing relief to American technology firms heavily reliant on Chinese manufacturing.

The decision, which includes products such as semiconductors, solar cells, and memory cards, marks the first major rollback in President Donald Trump’s trade war with China.

The exemptions, retroactively effective from 5 April, come amid concerns from US tech giants that consumer prices would soar.

Analysts say this move could be a turning point, especially for companies like Apple and Nvidia, which source most of their hardware from China. Industry reaction has been overwhelmingly positive, with suggestions that the policy shift could reshape global tech supply chains.

Despite easing tariffs on electronics, Trump has maintained a strict stance on Chinese trade, citing national security and economic independence.

The White House claims the reprieve gives firms time to shift manufacturing to the US. However, electronic goods will still face a separate 20% tariff due to China’s ties to fentanyl-related trade. Meanwhile, Trump insists high tariffs are essential leverage to renegotiate fairer global trade terms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Benchmark backlash hits Meta’s Maverick model

Meta’s latest open-source language model, Llama 4 Maverick, has ranked poorly on a widely used AI benchmark after the company was criticised for initially using a heavily modified, unreleased version to boost its results.

LM Arena, the platform where the performance was measured, has since updated its rules and retested Meta’s vanilla version.

The plain Maverick model, officially named ‘Llama-4-Maverick-17B-128E-Instruct,’ placed behind older competitors such as OpenAI’s GPT-4o, Anthropic’s Claude 3.5 Sonnet, and Google’s Gemini 1.5 Pro.

Meta admitted that the stronger-performing variant used earlier had been ‘optimised for conversationality,’ which likely gave it an unfair advantage in LM Arena’s human-rated comparisons.
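
For context on how such rankings are produced: arena-style leaderboards like LM Arena broadly work by collecting pairwise human votes between anonymised model responses and aggregating them into Elo-style ratings, which is why a variant tuned to be more agreeable can climb the table without being better at the underlying tasks. The minimal sketch below illustrates only that aggregation step; the model names, vote counts and rating constants are hypothetical, not LM Arena’s actual methodology or data.

```python
# Minimal sketch: turning pairwise human votes into Elo-style ratings.
# All names and numbers here are hypothetical, for illustration only.

K = 32  # assumed update step size

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under an Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(ratings: dict, winner: str, loser: str) -> None:
    """Shift both ratings toward the observed human preference."""
    e_w = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += K * (1 - e_w)
    ratings[loser] -= K * (1 - e_w)

ratings = {"model_a": 1000.0, "model_b": 1000.0}
# Hypothetical stream of votes: each entry is (preferred model, other model).
votes = [("model_a", "model_b")] * 60 + [("model_b", "model_a")] * 40
for winner, loser in votes:
    update(ratings, winner, loser)

print(ratings)  # model_a ends up rated higher because raters preferred it more often
```

In a setup like this, whichever response raters prefer wins the vote, so stylistic appeal can translate directly into ranking gains.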

Although LM Arena’s reliability as a performance gauge has been questioned, the controversy has raised concerns over transparency and benchmarking practices in the AI industry.

Meta has since released its open-source model to developers, encouraging them to customise it for real-world use and provide feedback.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta under fire for scrapping diversity and moderation policies

The NAACP Legal Defense Fund (LDF) has withdrawn from Meta’s civil rights advisory group, citing deep concerns over the company’s rollback of diversity, equity and inclusion (DEI) policies and changes to content moderation.

The decision follows Meta’s January announcement that it would end DEI programmes, eliminate fact-checking teams, and revise moderation rules across its platforms.

Civil rights organisations, including LDF, expressed alarm at the time, warning that the changes could silence marginalised voices and increase the risk of online harm.

In a letter to Meta CEO Mark Zuckerberg, they criticised the company for failing to consult the advisory group or consider the impact on protected communities. LDF’s Todd A Cox later said the policy shift posed a ‘grave risk’ to Black communities and public discourse.

LDF also noted that the company had seen progress under previous DEI policies, including a significant increase in Black and Hispanic employees.

Its reversal, the group argues, may breach federal civil rights laws and expose Meta to legal consequences.

LDF urged Meta to assess the effects of its policy changes and increase transparency about how harmful content is reported and removed. Meta has not commented publicly on the matter.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mood-based AI search tool tested by Netflix

Netflix is testing a new AI-powered search tool that could transform how users discover content on the platform.

Developed in collaboration with OpenAI, the feature goes beyond traditional search methods by allowing subscribers to use natural language queries based on mood, themes or descriptions rather than just titles or actors.

Currently, the tool is available only to a limited number of users in Australia and New Zealand using iOS devices, with opt-in access required. Netflix plans to expand the test to more regions, including the United States, in the near future.

The move highlights the streaming giant’s growing investment in AI, which it already uses for personalised recommendations.

Despite embracing AI, Netflix has stated it does not intend to replace creatives with technology. The company has publicly acknowledged concerns from the film and television industry, promising that writers, actors, and filmmakers remain central to its content creation strategy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE experts warn on AI privacy risks in art apps

A surge in AI applications transforming selfies into Studio Ghibli-style artwork has captivated social media, but UAE cybersecurity experts are raising concerns over privacy and data misuse.

Dr Mohamed Al Kuwaiti, Head of Cybersecurity for the UAE Government, warned that engaging with unofficial apps could lead to breaches or leaks of personal data. He emphasised that while AI’s benefits are clear, users must understand how their personal data is handled by these platforms.

He called for strong cybersecurity standards across all digital platforms, urging individuals to be more cautious with their data.

Media professionals are also sounding alarms. Adel Al-Rashed, an Emirati journalist, cautioned that free apps often mimic trusted platforms but could exploit user data. He advised users to stick to verified applications, noting that paid services, like ChatGPT’s Pro edition, offer stronger privacy protections.

While acknowledging the risks, social media influencer Ibrahim Al-Thahli highlighted the excitement AI brings to creative expression. He urged users to focus on education and safe engagement with the technology, underscoring the UAE’s goal to build a resilient digital economy.

For more information on these topics, visit diplomacy.edu.

AI transforms global healthcare with major growth ahead

The healthcare sector is poised for significant growth as AI continues to revolutionise the industry. A new report from Avant Technologies predicts an influx of AI-powered solutions in healthcare, with key technology giants leading the charge.

Avant Technologies and Ainnova, through their joint venture, plan to showcase their Vision AI platform at the 2025 Mexico Healthcare Innovation Summit.

The platform, aimed at early disease detection, is nearing approval from the US Food and Drug Administration (FDA) and is already in clinical trials in Southeast Asia and South America.

Apple and Amazon are also entering the AI healthcare space, with Apple launching an AI-powered health coach to guide users on diet and exercise, while Amazon is expanding its AI solutions with a healthcare chatbot.

Meanwhile, GE Healthcare has seen success with its AI-driven cardiac imaging, which has garnered FDA approval. The World Health Organization (WHO) supports AI integration in healthcare, particularly for outpatient care and early diagnosis, though it has urged regulators to be cautious of potential risks.

AI in healthcare is expected to grow exponentially, reaching a market valuation of $613 billion by 2034. The sector’s rapid expansion is driven by increasing adoption rates, particularly for early disease detection, administrative efficiency, and personalised medicine.

Despite data privacy concerns, the adoption of AI tools in fields like dermatology, oncology, and cardiovascular health is expected to surge. North America is predicted to lead the market, followed by Europe and South Asia, as more healthcare institutions embrace AI technologies.

For more information on these topics, visit diplomacy.edu.

Hackers leak data from Indian software firm in major breach

A major cybersecurity breach has reportedly compromised a software company based in India, with hackers claiming responsibility for stealing nearly 1.6 million rows of sensitive data on 19 December 2024.

A hacker identified as @303 is said to have accessed and exposed customer information and internal credentials, with the dataset later appearing on a dark web forum via a user known as ‘frog’.

The leaked data includes email addresses linked to major Indian insurance providers, contact numbers, and possible administrative access credentials.

Analysts found that the sample files feature information tied to employees of companies such as HDFC Ergo, Bajaj Allianz, and ICICI Lombard, suggesting widespread exposure across the sector.

Despite the firm’s stated dedication to safeguarding data, the incident raises doubts about its cybersecurity protocols.

The breach also comes as India’s insurance regulator, IRDAI, has begun enforcing stricter cyber measures. In March 2025, it instructed insurers to appoint forensic auditors in advance and perform full IT audits instead of waiting for threats to surface.

The breach follows a string of high-profile incidents, including the Star Health Insurance leak affecting 31 million customers.

With cyberattacks in India up by 261% in early 2024 and the average cost of a breach now ₹19.5 crore (₹195 million), experts warn that insurance firms must adopt stronger protections instead of relying on outdated defences.

For more information on these topics, visit diplomacy.edu.

Meta faces landmark antitrust trial

An antitrust trial against Meta commenced in Washington, with the US Federal Trade Commission (FTC) arguing that the company’s acquisitions of Instagram in 2012 and WhatsApp in 2014 were designed to crush competition instead of fostering innovation.

Although the FTC initially approved these deals, it now claims they effectively handed Meta a monopoly. Should the FTC succeed, Meta may be forced to sell off both platforms, a move that would reshape the tech landscape.

Meta has countered by asserting that users have benefited from Instagram’s development under its ownership, instead of being harmed by diminished competition. Legal experts believe the company will focus on consumer outcomes rather than corporate intent.

Nevertheless, statements made by Meta CEO Mark Zuckerberg, such as his remark that it’s ‘better to buy than to compete,’ may prove pivotal. Zuckerberg and former COO Sheryl Sandberg are both expected to testify during the trial, which could span several weeks.

Political tensions loom over the case, which was first launched under Donald Trump’s presidency. Reports suggest Zuckerberg has privately lobbied Trump to drop the lawsuit, while Meta has criticised the FTC’s reversal years after approving the acquisitions.

The recent dismissal of two Democratic commissioners from the FTC by Trump has raised concerns over political interference, especially as the commission now holds a Republican majority.

While the FTC seeks to challenge Meta’s dominance, experts caution that proving harm in this case will be far more difficult than in the ongoing antitrust battle against Google.

Unlike the search engine market, where a US court has already found Google to hold a monopoly, the social media space remains highly competitive, with platforms like TikTok, YouTube and X offering strong alternatives.

For more information on these topics, visit diplomacy.edu.

Google rolls out AI to improve US power grid connections

Google has announced a partnership with PJM Interconnection, the largest electricity grid operator in North America, to deploy AI aimed at reducing delays in connecting new power sources to the grid. The move comes as energy demand surges due to the expansion of data centres required for AI development.

Wait times for connecting renewable and traditional energy sources, such as wind, solar and gas, have reached record levels, increasing the risk of blackouts and rising energy costs in the US. Google’s AI technology, developed alongside Alphabet-backed Tapestry, will streamline and automate key planning processes traditionally handled manually by grid operators.

Initial deployment will focus on automating tasks like assessing project viability, which are currently time-consuming. Over time, the project aims to create a digital model of PJM’s grid, similar to Google Maps, allowing planners to view layered data and make faster, more informed decisions.
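
As a rough illustration of the ‘layered digital model’ idea, the toy sketch below (hypothetical code, not the actual Tapestry or PJM system) represents the grid as a set of substation records carrying data layers, such as spare capacity and queued interconnection requests, that a planner could filter when screening new projects.

```python
# Toy illustration of a layered grid model (hypothetical data; not the
# real Tapestry/PJM platform). Each substation carries data layers that
# a planner could toggle, much like layers on a map.

from dataclasses import dataclass, field

@dataclass
class Substation:
    name: str
    capacity_mw: float                                    # headroom for new connections
    queued_projects: list = field(default_factory=list)   # pending interconnection requests

grid = {
    "north_va_hub": Substation("north_va_hub", capacity_mw=120.0,
                               queued_projects=["solar_farm_a", "data_centre_b"]),
    "coastal_wind": Substation("coastal_wind", capacity_mw=15.0,
                               queued_projects=["wind_park_c"]),
}

def viable_sites(grid: dict, required_mw: float) -> list:
    """A crude screening step: keep substations with enough spare capacity."""
    return [s.name for s in grid.values() if s.capacity_mw >= required_mw]

print(viable_sites(grid, required_mw=50.0))  # -> ['north_va_hub']
```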

While it is too early to quantify exactly how much time will be saved, the collaboration is expected to gradually improve planning efficiency. PJM’s grid serves 67 million people, including the world’s largest data centre hub in northern Virginia, making this a critical step toward modernising the energy infrastructure needed to support the AI era.

For more information on these topics, visit diplomacy.edu.