UN calls for safeguards around emerging neuro-technologies

In a recent statement, the UN highlighted the growing field of neuro-technology, which encompasses devices and software that can measure, access, or manipulate the nervous system, as posing new risks to human rights.

The UN highlighted how such technologies could challenge fundamental concepts like ‘mental integrity’, autonomy and personal identity by enabling unprecedented access to brain data.

It warned that without robust regulation, the benefits of neuro-technology may come with costs such as privacy violations, unequal access and intrusive commercial uses.

The concerns align with broader debates about how advanced technologies, such as AI, are reshaping society, ethics, and international governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Winning the AI race means winning developers in China, says Huang of Nvidia

Nvidia CEO Jensen Huang said China is ‘nanoseconds’ behind the US in AI and urged Washington to lead by accelerating innovation and courting developers globally. He argued that excluding China would weaken the reach of US technology and risk splintering the ecosystem into incompatible stacks.

Huang’s remarks came amid ongoing export controls that bar Nvidia’s most advanced processors from the Chinese market. He acknowledged national security concerns but cautioned that strict limits can slow the spread of American tools that underpin AI research, deployment, and scaling.

Hardware remains central, Huang said, citing advanced accelerators and data-centre capacity as the substrate for training frontier models. Yet diffusion matters: widespread adoption of US platforms by global developers amplifies influence, reduces fragmentation, and accelerates innovation.

With sales of top-end chips restricted, Huang warned that Chinese firms will continue to innovate on domestic alternatives, increasing the likelihood of parallel systems. He called for policies that enable US leadership while preserving channels to the developer community in China.

Huang framed the objective as keeping America ahead, maintaining the world’s reliance on an American tech stack, and avoiding strategies that would push away half the world’s AI talent.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google flags adaptive malware that rewrites itself with AI

Hackers are experimenting with malware that taps large language models to morph in real time, according to Google’s Threat Intelligence Group. An experimental family dubbed PROMPTFLUX can rewrite and obfuscate its own code as it executes, aiming to sidestep static, signature-based detection.

PROMPTFLUX interacts with Gemini’s API to request on-demand functions and ‘just-in-time’ evasion techniques, rather than hard-coding behaviours. GTIG describes the approach as a step toward more adaptive, partially autonomous malware that dynamically generates scripts and changes its footprint.

Investigators say the current samples appear to be in development or testing, with incomplete features and limited Gemini API access. Google says it has disabled associated assets and has not observed a successful compromise, yet warns that financially motivated actors are exploring such tooling.

Researchers point to a maturing underground market for illicit AI utilities that lowers barriers for less-skilled offenders. State-linked operators in North Korea, Iran, and China are reportedly experimenting with AI to enhance reconnaissance, influence, and intrusion workflows.

Defenders are turning to AI, using security frameworks and agents like ‘Big Sleep’ to find flaws. Teams should expect AI-assisted obfuscation, emphasise behaviour-based detection, watch model-API abuse, and lock down developer and automation credentials.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Courts signal limits on AI in legal proceedings

A High Court judge warned that a solicitor who pushed an expert to accept an AI-generated draft breached their duty. Mr Justice Waksman called it a gross breach and cited a case from the latest survey.
He noted 14% of experts would accept such terms, which is unacceptable.

Updated guidance clarifies what limited judicial AI use is permissible. Judges may use a private ChatGPT 365 for summaries with confidential prompts. There is no duty to disclose, but the judgment must be the judge’s own.

Waksman cautioned against legal research or analysis done by AI. Hallucinated authorities and fake citations have already appeared. Experts must not let AI answer the questions they are retained to decide.

Survey findings show wider use of AI for drafting and summaries. Waksman drew a bright line between back-office aids and core duties. Convenience cannot trump independence, accuracy and accountability.

For practitioners, two rules follow. Solicitors must not foist AI-drafted expert opinions, and experts should refuse. Within courts, limited, non-determinative AI may assist, but outcomes must be human.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Central Bank warns of new financial scams in Ireland

The Central Bank of Ireland has launched a new campaign to alert consumers to increasingly sophisticated scams targeting financial services users. Officials warned that scammers are adapting, making caution essential with online offers and investments.

Scammers are now using tactics such as fake comparison websites that appear legitimate but collect personal information for fraudulent products or services. Fraud recovery schemes are also common, promising to recover lost funds for an upfront fee, which often leads to further financial loss.

Advanced techniques include AI-generated social media profiles and ads, or ‘deepfakes’, impersonating public figures to promote fake investment platforms.

Deputy Governor Colm Kincaid warned that scams now offer slightly above-market returns, making them harder to spot. Consumers are encouraged to verify information, use regulated service providers, and seek regulated advice before making financial decisions.

The Central Bank advises using trusted comparison sites, checking ads and investment platforms, ignoring unsolicited recovery offers, and following the SAFE test: Stop, Assess, Factcheck, Expose. Reporting suspected scams to the Central Bank or An Garda Síochána remains crucial to protecting personal finances.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Denmark’s new chat control plan raises fresh privacy concerns

Denmark has proposed an updated version of the EU’s controversial ‘chat control’ regulation, shifting from mandatory to voluntary scanning of private messages. Former MEP Patrick Breyer has warned, however, that the revision still threatens Europeans’ right to private communication.

Under the new plan, messaging providers could choose to scan chats for illegal material, but without a clear requirement for court orders. Breyer argued that this sidesteps the European Parliament’s position, which insists on judicial authorisation before any access to communications.

He also criticised the proposal for banning under-16s from using messaging apps like WhatsApp and Telegram, claiming such restrictions would prove ineffective and easily bypassed. In addition, the plan would effectively outlaw anonymous communication, requiring users to verify their identities through IDs.

Privacy advocates say the Danish proposal could set a dangerous precedent by eroding fundamental digital rights. Civil society groups have urged EU lawmakers to reject measures that compromise secure, anonymous communication essential for journalists and whistleblowers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UNDP and Algorand launch blockchain training for 24,000 staff

The United Nations Development Programme (UNDP) has officially expanded its Blockchain Academy to reach 24,000 personnel worldwide, including staff from UNDP, UN Volunteers, and the United Nations Capital Development Fund (UNCDF).

The initiative, launched in partnership with the Algorand Foundation, aims to strengthen understanding and practical use of blockchain technology to support sustainable development goals.

The academy’s expanded curriculum builds on a successful beta phase that certified over 30 UN personnel and introduced 18 hours of specialised training. It now offers advanced modules to help UN staff design transparent and efficient blockchain solutions for real-world challenges.

The training also fosters a collaborative network where participants share best practices and develop blockchain-driven projects across global programmes.

UNDP has used blockchain since 2015 to boost transparency and inclusion, from tracking supply chains to supporting energy trading and digital investments. Through its Algorand partnership, UNDP aims to speed up blockchain adoption by offering technical support and project incubation for scalable sustainable impact.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Anthropic strengthens European growth through Paris and Munich offices

AI firm Anthropic is expanding its European presence by opening new offices in Paris and Munich, strengthening its footprint alongside existing hubs in London, Dublin, and Zurich.

An expansion that follows rapid growth across the EMEA region, where the company has tripled its workforce and seen a ninefold increase in annual run-rate revenue.

The move comes as European businesses increasingly rely on Claude for critical enterprise tasks. Companies such as L’Oréal, BMW, SAP, and Sanofi are using the AI model to enhance software, improve workflows, and ensure operational reliability.

Germany and France, both among the top 20 countries in Claude usage per capita, are now at the centre to Anthropic’s strategic expansion.

Anthropic is also strengthening its leadership team across Europe. Guillaume Princen will oversee startups and digital-native businesses, while Pip White and Thomas Remy will lead the northern and southern EMEA regions, respectively.

A new head will soon be announced for Central and Eastern Europe, reflecting the company’s growing regional reach.

Beyond commercial goals, Anthropic is partnering with European institutions to promote AI education and culture. It collaborates with the Light Art Space in Berlin, supports student hackathons through TUM.ai, and works with the French organisation Unaite to advance developer training.

These partnerships reinforce Anthropic’s long-term commitment to responsible AI growth across the continent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta invests $600 billion to expand AI data centres across the US

A $600 billion investment aimed at boosting innovation, job creation, and sustainability is being launched in the US by Meta to expand its AI infrastructure.

Instead of outsourcing development, the company is building its new generation of AI data centres domestically, reinforcing America’s leadership in technology and supporting local economies.

Since 2010, Meta’s data centre projects have supported more than 30,000 skilled trade jobs and 5,000 operational roles, generating $20 billion in business for US subcontractors. These facilities are designed to power Meta’s AI ambitions while driving regional economic growth.

The company emphasises responsible development by investing heavily in renewable energy and water efficiency. Its projects have added 15 gigawatts of new energy to US power grids, upgraded local infrastructure, and helped restore water systems in surrounding communities.

Meta aims to become fully water positive by 2030.

Beyond infrastructure, Meta has channelled $58 million into community grants for schools, nonprofits, and local initiatives, including STEM education and veteran training programmes.

As AI grows increasingly central to digital progress, Meta’s continued investment in sustainable, community-focused data centres underscores its vision for a connected, intelligent future built within the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Inside OpenAI’s battle to protect AI from prompt injection attacks

OpenAI has identified prompt injection as one of the most pressing new challenges in AI security. As AI systems gain the ability to browse the web, handle personal data and act on users’ behalf, they become targets for malicious instructions hidden within online content.

These attacks, known as prompt injections, can trick AI models into taking unintended actions or revealing sensitive information.

To counter the issue, OpenAI has adopted a multi-layered defence strategy that combines safety training, automated monitoring and system-level security protections. The company’s research into ‘Instruction Hierarchy’ aims to help models distinguish between trusted and untrusted commands.

Continuous red-teaming and automated detection systems further strengthen resilience against evolving threats.

OpenAI also provides users with greater control, featuring built-in safeguards such as approval prompts before sensitive actions, sandboxing for code execution, and ‘Watch Mode’ when operating on financial or confidential sites.

These measures ensure that users remain aware of what actions AI agents perform on their behalf.

While prompt injection remains a developing risk, OpenAI expects adversaries to devote significant resources to exploiting it. The company continues to invest in research and transparency, aiming to make AI systems as secure and trustworthy as a cautious, well-informed human colleague.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!