X faces EU probe over AI data use

Elon Musk’s X platform is under formal investigation by the Irish Data Protection Commission over its alleged use of public posts from EU users to train the Grok AI chatbot.

The probe centres on whether X Internet Unlimited Company, the platform’s newly renamed Irish entity, has adhered to key GDPR principles when sharing publicly accessible data, such as posts and interactions, with its affiliate xAI, which develops the chatbot.

Concerns have grown over the lack of explicit user consent, especially as other tech giants such as Meta signal similar data usage plans.

The investigation is part of a wider regulatory push in the EU to hold AI developers accountable rather than allow unchecked experimentation. Experts note that many AI firms have deployed tools under a ‘build first, ask later’ mindset, an approach at odds with Europe’s strict data laws.

Should regulators conclude that public data still requires user consent, it could force a dramatic shift in how AI models are developed, not just in Europe but around the world.

Enterprises are now treading carefully. The investigation into X is already affecting AI adoption across the continent, with legal and reputational risks weighing heavily on decision-makers.

In one case, a Nordic bank halted its AI rollout midstream after its legal team couldn’t confirm whether European data had been used without proper disclosure. Instead of pushing ahead, the project was rebuilt using fully documented, EU-based training data.

The consequences could stretch far beyond the EU. Ireland’s probe might become a global benchmark for how governments view user consent in the age of data scraping and machine learning.

Instead of enforcement being region-specific, this investigation could inspire similar actions from regulators in places like Singapore and Canada. As AI continues to evolve, companies may have no choice but to adopt more transparent practices or face a rising tide of legal scrutiny.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TheStage AI makes neural network optimisation easy

In a move set to ease one of the most stubborn hurdles in AI development, Delaware-based startup TheStage AI has secured $4.5 million to launch its Automatic NNs Analyzer (ANNA).

Instead of requiring months of manual fine-tuning, ANNA lets developers optimise AI models in hours, cutting deployment costs by up to a factor of five. The technology is designed to simplify a process that, owing to expensive GPU infrastructure, has remained inaccessible to all but the largest tech firms.

TheStage AI’s system automatically compresses and refines models using techniques like quantisation and pruning, adapting them to various hardware environments without locking users into proprietary platforms.
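To make the two techniques concrete, here is a minimal NumPy sketch of magnitude pruning and symmetric int8 quantisation. It is an illustration of the general methods named above, not TheStage AI’s actual implementation, and the function names and the 50% sparsity figure are illustrative assumptions.

```python
import numpy as np

def quantise_int8(weights):
    """Symmetric quantisation: map float32 weights to int8 plus one scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def prune(weights, sparsity=0.5):
    """Magnitude pruning: zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)   # stand-in for a weight matrix

q, scale = quantise_int8(w)
w_restored = q.astype(np.float32) * scale        # dequantise at inference time
w_pruned = prune(w, sparsity=0.5)

print("max quantisation error:", np.abs(w - w_restored).max())
print("fraction of weights pruned:", np.mean(w_pruned == 0.0))
```

Both techniques trade a small, controllable loss of precision for large savings: int8 storage is a quarter the size of float32, and pruned weights can be skipped entirely on sparse-aware hardware.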

Rather than being tied to cloud deployment, its models, called ‘Elastic models’, can run anywhere from smartphones to on-premise GPUs. This gives startups and enterprises a cost-effective way to trade off quality against speed through a simple interface, akin to choosing video resolution on a streaming platform.

Backed by notable investors including Mehreen Malik and Atlantic Labs, and already used by companies like Recraft.ai, the startup addresses a growing need as demand shifts from AI training to real-time inference.

Unlike competitors acquired by larger corporations and tied to specific ecosystems, TheStage AI takes a dual-market approach, helping both app developers and AI researchers. Their strategy supports scale without complexity, effectively making AI optimisation available to teams of any size.

Founded by a group of PhD holders with experience at Huawei, the team combines deep academic roots with practical industry application.

By offering a tool that streamlines deployment instead of complicating it, TheStage AI hopes to enable broader use of generative AI technologies in sectors where performance and cost have long been limiting factors.

Nvidia expands AI chip production in the US amid political pressure and global shifts

Nvidia is significantly ramping up its presence in the United States by commissioning over a million square feet of manufacturing space in Arizona and Texas to build and test its powerful AI chips. The tech giant has begun producing its Blackwell chips at TSMC facilities in Phoenix and is developing large-scale ‘supercomputer’ manufacturing plants in partnership with Foxconn in Houston and Wistron in Dallas.

The company projects mass production to begin within the next 12 to 15 months, with ambitions to manufacture up to half a trillion dollars’ worth of AI infrastructure in the US over the next four years. CEO Jensen Huang emphasised that this move marks the first time the core components of global AI infrastructure are being built domestically.

He cited growing global demand, supply chain resilience, and national security as key reasons for the shift. Nvidia’s decision follows an agreement with the Trump administration that helped the company avoid export restrictions on its H20 chip, a top-tier processor still eligible for export to China.

Nvidia joins a broader wave of AI industry leaders aligning with the Trump administration’s ‘America-first’ strategy. Companies like OpenAI and Microsoft have pledged massive investments in US-based AI infrastructure, hoping to secure political goodwill and avoid regulatory hurdles.

Trump has also reportedly pressured key suppliers like TSMC to expand American operations, threatening tariffs as high as 100% if they fail to comply. Despite the enthusiasm, Nvidia’s expansion faces headwinds.

A shortage of skilled workers and potential retaliation from China, particularly over access to raw materials, pose serious risks. Meanwhile, Trump’s recent moves to undermine the CHIPS Act, which provides critical funding for domestic chipmaking, have raised concerns about the long-term viability of US semiconductor investment.

US exempts key electronics from China import taxes

Smartphones, computers, and key tech components have been granted exemption from the latest round of US tariffs, providing relief to American technology firms heavily reliant on Chinese manufacturing.

The decision, which includes products such as semiconductors, solar cells, and memory cards, marks the first major rollback in President Donald Trump’s trade war with China.

The exemptions, retroactively effective from 5 April, come amid concerns from US tech giants that consumer prices would soar.

Analysts say this move could be a turning point, especially for companies like Apple and Nvidia, which source most of their hardware from China. Industry reaction has been overwhelmingly positive, with suggestions that the policy shift could reshape global tech supply chains.

Despite easing tariffs on electronics, Trump has maintained a strict stance on Chinese trade, citing national security and economic independence.

The White House claims the reprieve gives firms time to shift manufacturing to the US. However, electronic goods will still face a separate 20% tariff due to China’s ties to fentanyl-related trade. Meanwhile, Trump insists high tariffs are essential leverage to renegotiate fairer global trade terms.

Benchmark backlash hits Meta’s Maverick model

Meta’s latest open-source language model, Llama 4 Maverick, has ranked poorly on a widely used AI benchmark after the company was criticised for initially using a heavily modified, unreleased version to boost its results.

LM Arena, the platform where the performance was measured, has since updated its rules and retested Meta’s vanilla version.

The plain Maverick model, officially named ‘Llama-4-Maverick-17B-128E-Instruct,’ placed behind older competitors such as OpenAI’s GPT-4o, Anthropic’s Claude 3.5 Sonnet, and Google’s Gemini 1.5 Pro.

Meta admitted that the stronger-performing variant used earlier had been ‘optimised for conversationality,’ which likely gave it an unfair advantage in LM Arena’s human-rated comparisons.

Although LM Arena’s reliability as a performance gauge has been questioned, the controversy has raised concerns over transparency and benchmarking practices in the AI industry.

Meta has since released its open-source model to developers, encouraging them to customise it for real-world use and provide feedback.

ChatGPT hits 800 million users after viral surge

ChatGPT’s user base has doubled in recent weeks, with OpenAI CEO Sam Altman estimating up to 800 million people now use the platform weekly.

Speaking at TED 2025, Altman confirmed the surge during an on-stage conversation, acknowledging the figure after being pressed by TED curator Chris Anderson. He suggested user growth was accelerating rapidly, noting that the figure represents around 10% of the global population.

The platform’s popularity has soared thanks to viral new features, including a March update that introduced Ghibli mode—an image and video generator inspired by the animation style of Studio Ghibli.

Altman noted that this single feature drew in a million users within an hour of launch. When asked about artist compensation, he said OpenAI may eventually offer automatic payments to creators whose styles are used in prompts, though safeguards remain in place to avoid generating copyrighted material.

Other major updates include the rollout of a memory function that allows ChatGPT to remember user interactions indefinitely, making it a more personalised assistant over time. Altman also spoke about the development of autonomous AI agents capable of acting on users’ behalf, framed with safety guardrails.

While acknowledging fears of AI replacing human jobs, he encouraged a view of AI as a tool to unlock greater capabilities rather than a threat to livelihoods.

For more information on these topics, visit diplomacy.edu.

Creators get AI-made background music on YouTube

YouTube has introduced a new AI-powered tool that creates free background music for video creators, helping them avoid copyright issues. The feature, known as Music Assistant, was showcased on YouTube’s Creator Insider channel and is designed to match music to a video’s tone using simple text prompts.

Users can enter descriptions such as ‘uplifting and motivational music for a workout montage’, and the tool will generate several suitable tracks for review and download. Music Assistant is currently available within YouTube’s Creator Music beta section and is being rolled out gradually to those with access.

YouTube’s move follows broader industry trends, with companies like Stability AI and Meta also developing similar music-generating technologies.

The platform has already been experimenting with AI music through features like Dream Track and a music remixer for Shorts, allowing further creative flexibility for users.

Mood-based AI search tool tested by Netflix

Netflix is testing a new AI-powered search tool that could transform how users discover content on the platform.

Developed in collaboration with OpenAI, the feature goes beyond traditional search methods by allowing subscribers to use natural language queries based on mood, themes or descriptions rather than just titles or actors.

Currently, the tool is available only to a limited number of users in Australia and New Zealand using iOS devices, with opt-in access required. Netflix plans to expand the test to more regions, including the United States, in the near future.

The move highlights the streaming giant’s growing investment in AI, which it already uses for personalised recommendations.

Despite embracing AI, Netflix has stated it does not intend to replace creatives with technology. The company has publicly acknowledged concerns from the film and television industry, promising that writers, actors, and filmmakers remain central to its content creation strategy.

ChatGPT gets infinite memory upgrade

OpenAI is introducing a major upgrade to ChatGPT’s memory capabilities, allowing the AI to retain all previous conversations indefinitely. CEO Sam Altman described the development as a step toward making ChatGPT a more personalised assistant that better adapts to users over time.

Previous versions of the AI could only remember chats from the past few weeks, which helped with ongoing projects and stylistic consistency.

The new update goes much further, enabling ChatGPT to recall details from all prior conversations and use them to offer more tailored support, such as giving lifestyle advice or acting as a personal coach.

The feature will initially roll out to ChatGPT Plus and Pro subscribers, with broader availability likely in the future. However, the move has sparked some concerns around privacy, as the AI’s enhanced recall could allow anyone with account access to uncover personal details with a simple prompt.

Users may wish to take precautions, such as setting up custom instructions to limit the disclosure of sensitive information.

UAE experts warn on AI privacy risks in art apps

A surge in AI applications transforming selfies into Studio Ghibli-style artwork has captivated social media, but UAE cybersecurity experts are raising concerns over privacy and data misuse.

Dr Mohamed Al Kuwaiti, Head of Cybersecurity for the UAE Government, warned that engaging with unofficial apps could lead to breaches or leaks of personal data. He emphasised that while AI’s benefits are clear, users must understand how their personal data is handled by these platforms.

He called for strong cybersecurity standards across all digital platforms, urging individuals to be more cautious with their data.

Media professionals are also sounding alarms. Adel Al-Rashed, an Emirati journalist, cautioned that free apps often mimic trusted platforms but could exploit user data. He advised users to stick to verified applications, noting that paid services, like ChatGPT’s Pro edition, offer stronger privacy protections.

While acknowledging the risks, social media influencer Ibrahim Al-Thahli highlighted the excitement AI brings to creative expression. He urged users to focus on education and safe engagement with the technology, underscoring the UAE’s goal to build a resilient digital economy.
