Elon Musk’s X tightens control on AI data use

Social media platform X has updated its developer agreement to prohibit the use of its content for training large language models.

The new clause, added under the restrictions section of the agreement, forbids any attempt to use X’s API or content to fine-tune or train foundation or frontier AI models.

The move follows the acquisition of X by xAI, Elon Musk’s AI company, which is developing its own models.

By restricting external access, the company aims to prevent competitors from freely using X’s data while maintaining control over a valuable resource for training AI systems.

X joins a growing list of platforms, including Reddit and The Browser Company, that have introduced terms blocking unauthorised AI training.

The shift reflects a broader industry trend towards limiting open data access amid the rising value of proprietary content in the AI arms race.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FBI warns BADBOX 2.0 malware is infecting millions

The FBI has issued a warning about the resurgence of BADBOX 2.0, a dangerous form of malware infecting millions of consumer electronics globally.

Often preloaded onto low-cost smart TVs, streaming boxes, and IoT devices manufactured primarily in China, the malware grants cybercriminals backdoor access, enabling theft, surveillance, and fraud while remaining essentially undetectable.

BADBOX 2.0 forms part of a massive botnet and can also infect devices through malicious apps and drive-by downloads, especially from unofficial Android stores.

Once activated, the malware enables a range of attacks, including click fraud, fake account creation, DDoS attacks, and the theft of one-time passwords and personal data.

Removing the malware is extremely difficult, as it typically requires flashing new firmware, an option unavailable for most of the affected devices.

Users are urged to check their hardware against a published list of compromised models and to avoid sideloading apps or purchasing unverified connected tech.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google is testing voice-powered Search Live in AI Mode

Google is rolling out a new voice-powered feature called Search Live as part of its evolving AI Mode for Search. Initially previewed at Google I/O 2025, the feature allows users to interact with Search through real-time spoken conversations without needing to type or tap through results.

Available to select users in the United States via the Google app on Android and iOS, Search Live lets users ask questions aloud and receive voice responses. It also supports conversational follow-ups, creating a more natural flow of information.

The feature is powered by Project Astra, Google’s real-time multimodal AI technology, which also underpins features such as Gemini Live.

When enabled, a sparkle-styled waveform icon appears under the search bar, replacing the previous Google Lens shortcut. Tapping it opens the feature and activates four voice style options—Cosmo, Neso, Terra, and Cassini. Users can opt for audio responses or mute them and view a transcript instead.

Search Live marks a broader shift in how Google is rethinking search: turning static queries into dynamic dialogues. The company also plans to soon expand AI Mode with support for live camera feeds, aiming to make Search more immersive and interactive.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Schools in the EU start adapting to the AI Act

European schools are taking their first concrete steps to integrate AI in line with the EU AI Act, with educators and experts urging a measured, strategic approach to compliance.

At a recent conference on AI in education, school leaders and policymakers explored how to align AI adoption with the incoming regulations.

With key provisions of the EU AI Act already in effect and full enforcement coming by August 2026, the pressure is on schools to ensure their use of AI is transparent, fair, and accountable. The law classifies AI tools by risk level, with those used to evaluate or monitor students subject to stricter oversight.

Matthew Wemyss, author of ‘AI in Education: An EU AI Act Guide,’ laid out a framework for compliance: assess current AI use, scrutinise the impact on students, and demand clear documentation from vendors.

Wemyss stressed that schools remain responsible as deployers, even when using third-party tools, and should appoint governance leads who understand both technical and ethical aspects.

Education consultant Philippa Wraithmell warned schools not to confuse action with strategy. She advocated starting small, prioritising staff confidence, and ensuring every tool aligns with learning goals, data safety, and teacher readiness.

Al Kingsley MBE emphasised the role of strong governance structures and parental transparency, urging school boards to improve their digital literacy to lead effectively.

The conference highlighted a unifying theme: meaningful AI integration in schools requires intentional leadership, community involvement, and long-term planning. With the right mindset, schools can use AI not just to automate, but to enhance learning outcomes responsibly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT adds meeting recording and cloud access

OpenAI has launched new features for ChatGPT that allow it to record meetings, transcribe conversations, and pull information directly from cloud platforms like Google Drive and SharePoint.

Instead of relying on typed input alone, users can now speak to ChatGPT, which records audio, creates editable summaries, and helps generate follow-up content such as emails or project outlines.

‘Record’ is currently available to Team users via the macOS app and will soon expand to Enterprise and Edu accounts.

The recording tool automatically deletes the audio after transcription and applies existing workspace data rules, ensuring recordings are not used for training.

Instead of leaving notes scattered across different platforms, users gain a structured and searchable history of conversations, voice notes, or brainstorming sessions, which ChatGPT can recall and apply during future interactions.

At the same time, OpenAI has introduced new connectors for business users that let ChatGPT access files from cloud services like Dropbox, OneDrive, Box, and others.

These connectors allow ChatGPT to search and summarise information from internal documents, rather than depending only on web search or user uploads. The update also includes beta support for Deep Research agents that can work with tools like GitHub and HubSpot.

OpenAI has embraced the Model Context Protocol, an open standard allowing organisations to build their own custom connectors for proprietary tools.

Rather than serving purely as a general-purpose chatbot, ChatGPT is evolving into a workplace assistant capable of tapping into and understanding a company’s complete knowledge base.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

M&S CEO targeted by hackers in abusive ransom email

Marks & Spencer has been directly targeted by a ransomware group calling itself DragonForce, which sent a vulgar and abusive ransom email to CEO Stuart Machin using a compromised employee email address.

The message, laced with offensive language and racist terms, demanded that Machin engage via a darknet portal to negotiate payment. It also claimed that the hackers had encrypted the company’s servers and stolen customer data, a claim M&S eventually acknowledged weeks later.

The email, dated 23 April, appears to have been sent from the account of an Indian IT worker employed by Tata Consultancy Services (TCS), a long-standing M&S tech partner.

TCS has denied involvement and stated that its systems were not the source of the breach. M&S has remained silent publicly, neither confirming the full scope of the attack nor disclosing whether a ransom was paid.

The cyber attack has caused major disruption, costing M&S an estimated £300 million and halting online orders for over six weeks.

DragonForce has also claimed responsibility for a simultaneous attack on the Co-op, which left some shelves empty for days. While nothing has yet appeared on DragonForce’s leak site, the group claims it will publish stolen information soon.

Investigators believe DragonForce operates as a ransomware-as-a-service collective, offering tools and platforms to cybercriminals in exchange for a 20% share of any ransom.

Some experts suspect the real perpetrators may be young hackers from the West, linked to a loosely organised online community called Scattered Spider. The UK’s National Crime Agency has confirmed it is focusing on the group as part of its inquiry into the recent retail hacks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini 2.5 Pro tops AI coding tests, surpasses ChatGPT and Claude

Google has released an updated version of its Gemini 2.5 Pro model, addressing issues found in earlier updates.

Unlike the I/O Edition, which focused mostly on coding, the new version improves performance more broadly and is expected to become a stable release in both the Gemini app and web interface.

The company claims the updated model performs significantly better in code generation, topping the Aider Polyglot test with a score of 82.2 percent—surpassing offerings from OpenAI, Anthropic and DeepSeek.

Beyond coding, the model aims to close the performance gaps introduced with the March 25th update, especially in creativity and response formatting.

Developers can now fine-tune the model’s ‘thinking budget’, while users should notice a more transparent output structure. These changes, together with consistent improvements in leaderboard ratings on LMArena and WebDevArena, suggest that Google is extending its lead in the AI race.

Google continues to rely on blind testing to judge how people feel about its models, and the new Gemini Pro seems to resonate well. In fact, it now answers even quirky test questions with more clarity and confidence—something that had been lacking in earlier versions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

G42 and Mistral team up to build AI platforms

Abu Dhabi-based tech company G42 has partnered with French startup Mistral AI to co-develop advanced AI platforms and infrastructure across Europe, the Middle East, and the Global South.

The collaboration aims to span the full AI value chain, from model training to sector-specific applications, combining Mistral’s open-weight language models with G42’s infrastructure expertise.

The deal builds on prior AI cooperation agreements endorsed by UAE President Sheikh Mohamed bin Zayed and French President Emmanuel Macron, reinforcing both countries’ shared ambition to lead in AI innovation.

G42 subsidiaries Core42 and Inception will support the initiative by contributing technical development and deployment capabilities.

This partnership is part of a broader UAE strategy to position itself as a global AI hub and diversify its economy beyond oil. With AI expected to add up to $91 billion to the UAE’s economy by 2030, such international alliances reflect a shift in AI power centres toward the Middle East.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reddit accuses Anthropic of misusing user content

Reddit has taken legal action against AI startup Anthropic, alleging that the company scraped its platform without permission and used the data to train and commercialise its Claude AI models.

The lawsuit, filed in the Superior Court of California in San Francisco, accuses Anthropic of breach of contract, unjust enrichment, and interference with Reddit’s operations.

According to Reddit, Anthropic accessed the platform more than 100,000 times despite publicly claiming to have stopped doing so.

The complaint claims Anthropic ignored Reddit’s technical safeguards, such as robots.txt files, and bypassed the platform’s user agreement to extract large volumes of user-generated content.
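The robots.txt safeguard mentioned here is an advisory convention: a compliant crawler is expected to fetch a site’s robots.txt file and check each URL against its rules before scraping. A minimal sketch using Python’s standard library shows how such a check works; the bot names and rules below are hypothetical, not Reddit’s actual file:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules of the kind a site might publish to
# block an AI crawler while allowing all other user agents.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
# parse() takes the file's lines; in practice parser.read() would
# fetch https://<site>/robots.txt over the network instead.
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler consults the rules before every request.
print(parser.can_fetch("ExampleAIBot", "https://example.com/r/news"))  # False
print(parser.can_fetch("SearchBot", "https://example.com/r/news"))     # True
```

Because the rules are purely advisory, nothing technically prevents a crawler from ignoring this check, which is why Reddit’s complaint pairs the technical safeguard with contractual claims.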

Reddit argues that Anthropic’s actions undermine its licensing deals with companies like OpenAI and Google, which have agreed to strict content usage and deletion protocols.

The filing asserts that Anthropic intentionally used personal data from Reddit without ever seeking user consent, calling the company’s conduct deceptive. Despite public statements suggesting respect for privacy and web-scraping limitations, Anthropic is portrayed as having disregarded both.

The lawsuit even cites Anthropic’s own 2021 research that acknowledged Reddit content as useful in training AI models.

Reddit is now seeking damages, repayment of profits, and a court order to stop Anthropic from using its data further. The market responded positively, with Reddit’s shares closing nearly 67% higher at $118.21—indicating investor support for the company’s aggressive stance on data protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI and India plan AI infrastructure push

OpenAI is in discussions with the Indian government to collaborate on data centre infrastructure as part of its new global initiative, ‘OpenAI for Countries’.

The programme aims to help partner nations expand AI capabilities through joint investment and strategic coordination with the US. India could become one of the ten initial countries in the effort, although specific terms remain under wraps.

During a visit to Delhi, OpenAI’s chief strategy officer Jason Kwon emphasised India’s potential, citing the government’s clear focus on infrastructure and AI talent.

Similar to the UAE’s recently announced Stargate project in Abu Dhabi, India may host large-scale AI computing infrastructure while also investing in the US under the same framework.

To nurture AI skills, OpenAI and the Ministry of Electronics and IT’s IndiaAI Mission launched the ‘OpenAI Academy’. It marks OpenAI’s first international rollout of its educational platform.

The partnership will provide free access to AI tools, developer training, and events, with content in English, Hindi, and four additional regional languages. It will also support government officials and startups through dedicated learning platforms.

The collaboration includes hackathons, workshops in six cities, and up to $100,000 in API credits for selected IndiaAI fellows and startups. The aim is to accelerate innovation and help Indian developers and researchers scale AI solutions more efficiently, according to IT Minister Ashwini Vaishnaw.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!