China proposes rare earth export relief for EU

China has proposed creating a ‘green channel’ for rare earth exports to the EU, aiming to ease the impact of its recent restrictions. These materials, vital to electric vehicles and household appliances, have been under stricter export controls since April.

During recent talks, Trade Commissioner Maroš Šefčovič warned Chinese officials that the curbs had caused major disruptions across Europe, describing the situation as alarming. While some progress in licence approvals has been noted, businesses argue it remains inadequate.

The talks come as both sides prepare for a high-stakes EU-China summit and continue negotiations over tariffs on Chinese electric vehicles.

Brussels has imposed duties of up to 35.3%, citing unfair subsidies, while Beijing is pushing for a deal involving minimum pricing to avoid the tariffs.

China’s commerce ministry confirmed the discussions are in their final stage but acknowledged that more work is needed to reach a resolution.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE AI megaproject faces US chip export concerns

Plans for a vast AI data hub in the UAE have raised security concerns in Washington due to the country’s close ties with China.

The $100 billion Stargate UAE campus aims to deploy advanced US chips, but US officials are scrutinising the risk of technology leakage.

Although the Trump administration supports the project, bipartisan fears remain about whether the UAE can safeguard US-developed AI and chips from foreign adversaries.

A final agreement has not been reached as both sides negotiate export conditions, with possible restrictions on Nvidia’s hardware.

The initial phase of the Stargate project will activate 200 megawatts of capacity by 2026, but the deal’s future may depend on the UAE’s willingness to accept strict US oversight.

Talks over potential amendments continue, delaying approval of what could become a $500 billion venture.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK judges issue warning on unchecked AI use by lawyers

A senior UK judge has warned that lawyers may face prosecution if they continue citing fake legal cases generated by AI without verifying their accuracy.

High Court justice Victoria Sharp called the misuse of AI a threat to justice and public trust, after lawyers in two recent cases relied on false material created by generative tools.

In one £90 million lawsuit involving Qatar National Bank, a lawyer submitted 18 cases that did not exist. The client later admitted to supplying the false information, but Justice Sharp criticised the lawyer for depending on the client’s research instead of conducting proper legal checks.

In another case, five fabricated cases were used in a housing claim against the London Borough of Haringey. The barrister denied using AI but failed to provide a clear explanation.

Both incidents have been referred to professional regulators. Sharp warned that submitting false information could amount to contempt of court or, in severe cases, perverting the course of justice — an offence that can lead to life imprisonment.

While recognising AI as a useful legal tool, Sharp stressed the need for oversight and regulation. She said AI’s risks must be managed with professional discipline if public confidence in the legal system is to be preserved.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK teams with tech giants on AI training

The UK government is launching a nationwide AI skills initiative aimed at both workers and schoolchildren, with Prime Minister Keir Starmer announcing partnerships with major tech companies including Google, Microsoft and Amazon.

The £187 million TechFirst programme will provide AI education to one million secondary students and train 7.5 million workers over the next five years.

Rather than keeping such tools limited to specialists, the government plans to make AI training accessible across classrooms and businesses. Companies involved will make learning materials freely available to boost digital skills and productivity, particularly in using chatbots and large language models.

Starmer said the scheme is designed to empower the next generation to shape AI’s future instead of being shaped by it. He called it the start of a new era of opportunity and growth, as the UK aims to strengthen its global leadership in AI.

The initiative arrives as the country’s AI sector, currently worth £72 billion, is projected to grow to more than £800 billion by 2035.

The government also signed two agreements with Nvidia to support a nationwide AI talent pipeline, reinforcing efforts to expand both the workforce and innovation in the sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia and FCA open AI sandbox for UK fintechs

Financial firms across the UK will soon be able to experiment with AI in a new regulatory sandbox, launched by the Financial Conduct Authority (FCA) in partnership with Nvidia.

Known as the Supercharged Sandbox, it offers a secure testing ground for firms that want to explore AI tools without needing advanced computing resources of their own.

Set to begin in October, the initiative is open to any financial services company testing AI-driven ideas. Firms will have access to Nvidia’s accelerated computing platform and tailored AI software, helping them work with complex data, improve automation, and enhance risk management in a controlled setting.

The FCA said the sandbox is designed to support firms lacking the in-house capacity to test new technology.

It aims to provide not only computing power but also regulatory guidance and access to better datasets, creating an environment where innovation can flourish while remaining compliant with rules.

The move forms part of a wider push by the UK government to foster economic growth through innovation. Finance minister Rachel Reeves has urged regulators to clear away obstacles to growth and praised the FCA and Bank of England for acting on her call to cut red tape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Elon Musk’s X tightens control on AI data use

Social media platform X has updated its developer agreement to prohibit the use of its content for training large language models.

The new clause, added under the restrictions section, forbids any attempt to use X’s API or content to fine-tune or train foundation or frontier AI models.

The move follows the acquisition of X by Elon Musk’s AI company xAI, which is developing its own models.

By restricting external access, the company aims to prevent competitors from freely using X’s data while maintaining control over a valuable resource for training AI systems.

X joins a growing list of platforms, including Reddit and The Browser Company, that have introduced terms blocking unauthorised AI training.

The shift reflects a broader industry trend towards limiting open data access amid the rising value of proprietary content in the AI arms race.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FBI warns BADBOX 2.0 malware is infecting millions

The FBI has issued a warning about the resurgence of BADBOX 2.0, a dangerous form of malware infecting millions of consumer electronics globally.

Often preloaded onto low-cost smart TVs, streaming boxes, and IoT devices, primarily from China, the malware grants cyber criminals backdoor access, enabling theft, surveillance, and fraud while remaining essentially undetectable.

BADBOX 2.0 forms part of a massive botnet and can also infect devices through malicious apps and drive-by downloads, especially from unofficial Android stores.

Once activated, the malware enables a range of attacks, including click fraud, fake account creation, DDoS attacks, and the theft of one-time passwords and personal data.

Removing the malware is extremely difficult, as it typically requires flashing new firmware, an option unavailable for most of the affected devices.

Users are urged to check their hardware against a published list of compromised models and to avoid sideloading apps or purchasing unverified connected tech.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google is testing voice-powered Search Live in AI Mode

Google is rolling out a new voice-powered feature called Search Live as part of its evolving AI Mode for Search. Initially previewed at Google I/O 2025, the feature allows users to interact with Search through real-time spoken conversations without needing to type or tap through results.

Available to select users in the United States via the Google app on Android and iOS, Search Live lets users ask questions aloud and receive voice responses. It also supports conversational follow-ups, creating a more natural flow of information.

The feature is powered by Project Astra, Google’s real-time speech processing engine that underpins other innovations like Gemini Live.

When enabled, a sparkle-styled waveform icon appears under the search bar, replacing the previous Google Lens shortcut. Tapping it opens the feature and activates four voice style options—Cosmo, Neso, Terra, and Cassini. Users can opt for audio responses or mute them and view a transcript instead.

Search Live marks a broader shift in how Google is rethinking search: turning static queries into dynamic dialogues. The company also plans to expand AI Mode with support for live camera feeds shortly, aiming to make Search more immersive and interactive.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Schools in the EU start adapting to the AI Act

European schools are taking their first concrete steps to integrate AI in line with the EU AI Act, with educators and experts urging a measured, strategic approach to compliance.

At a recent conference on AI in education, school leaders and policymakers explored how to align AI adoption with the incoming regulations.

With key provisions of the EU AI Act already in effect and full enforcement coming by August 2026, the pressure is on schools to ensure their use of AI is transparent, fair, and accountable. The law classifies AI tools by risk level, with those used to evaluate or monitor students subject to stricter oversight.

Matthew Wemyss, author of ‘AI in Education: An EU AI Act Guide,’ laid out a framework for compliance: assess current AI use, scrutinise the impact on students, and demand clear documentation from vendors.

Wemyss stressed that schools remain responsible as deployers, even when using third-party tools, and should appoint governance leads who understand both technical and ethical aspects.

Education consultant Philippa Wraithmell warned schools not to confuse action with strategy. She advocated starting small, prioritising staff confidence, and ensuring every tool aligns with learning goals, data safety, and teacher readiness.

Al Kingsley MBE emphasised the role of strong governance structures and parental transparency, urging school boards to improve their digital literacy to lead effectively.

The conference highlighted a unifying theme: meaningful AI integration in schools requires intentional leadership, community involvement, and long-term planning. With the right mindset, schools can use AI not just to automate, but to enhance learning outcomes responsibly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT adds meeting recording and cloud access

OpenAI has launched new features for ChatGPT that allow it to record meetings, transcribe conversations, and pull information directly from cloud platforms like Google Drive and SharePoint.

Instead of relying on typed input alone, users can now speak to ChatGPT, which records audio, creates editable summaries, and helps generate follow-up content such as emails or project outlines.

‘Record’ is currently available to Team users via the macOS app and will soon expand to Enterprise and Edu accounts.

The recording tool automatically deletes the audio after transcription and applies existing workspace data rules, ensuring recordings are not used for training.

Instead of leaving notes scattered across different platforms, users gain a structured and searchable history of conversations, voice notes, or brainstorming sessions, which ChatGPT can recall and apply during future interactions.

At the same time, OpenAI has introduced new connectors for business users that let ChatGPT access files from cloud services like Dropbox, OneDrive, Box, and others.

These connectors allow ChatGPT to search and summarise information from internal documents, rather than depending only on web search or user uploads. The update also includes beta support for Deep Research agents that can work with tools like GitHub and HubSpot.

OpenAI has embraced the Model Context Protocol, an open standard allowing organisations to build their own custom connectors for proprietary tools.
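
To illustrate what such a custom connector involves, here is a minimal sketch using the official Model Context Protocol Python SDK and its FastMCP helper; the server name, tool, and stubbed document lookup are hypothetical illustrations, not OpenAI’s or any vendor’s actual integration.

```python
# Hypothetical custom MCP connector exposing an internal document search tool.
# Assumes the official Model Context Protocol Python SDK (package "mcp").
from mcp.server.fastmcp import FastMCP

server = FastMCP("internal-docs")  # name shown to the MCP client

@server.tool()
def search_docs(query: str, limit: int = 5) -> list[str]:
    """Return titles of internal documents that match the query (stub data)."""
    corpus = ["Q3 budget memo", "Onboarding handbook", "Incident response runbook"]
    return [title for title in corpus if query.lower() in title.lower()][:limit]

if __name__ == "__main__":
    server.run()  # serves over stdio so an MCP-aware client can call the tool
```

In a production connector, the stubbed lookup would be replaced with a call to the organisation’s own search backend, while the MCP client handles tool discovery and invocation.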

Rather than serving purely as a general-purpose chatbot, ChatGPT is evolving into a workplace assistant capable of tapping into and understanding a company’s complete knowledge base.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!