In a recent statement, the UN warned that the growing field of neuro-technology (devices and software that can measure, access or manipulate the nervous system) poses new risks to human rights.
It noted that such technologies could challenge fundamental concepts such as ‘mental integrity’, autonomy and personal identity by enabling unprecedented access to brain data.
It warned that without robust regulation, the benefits of neuro-technology may come with costs such as privacy violations, unequal access and intrusive commercial uses.
The concerns align with broader debates about how advanced technologies, such as AI, are reshaping society, ethics, and international governance.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Nvidia CEO Jensen Huang said China is ‘nanoseconds’ behind the US in AI and urged Washington to lead by accelerating innovation and courting developers globally. He argued that excluding China would weaken the reach of US technology and risk splintering the ecosystem into incompatible stacks.
Huang’s remarks came amid ongoing export controls that bar Nvidia’s most advanced processors from the Chinese market. He acknowledged national security concerns but cautioned that strict limits can slow the spread of American tools that underpin AI research, deployment, and scaling.
Hardware remains central, Huang said, citing advanced accelerators and data-centre capacity as the substrate for training frontier models. Yet diffusion matters: widespread adoption of US platforms by global developers amplifies influence, reduces fragmentation, and accelerates innovation.
With sales of top-end chips restricted, Huang warned that Chinese firms will continue to innovate on domestic alternatives, increasing the likelihood of parallel systems. He called for policies that enable US leadership while preserving channels to the developer community in China.
Huang framed the objective as keeping America ahead, maintaining the world’s reliance on an American tech stack, and avoiding strategies that would push away half the world’s AI talent.
In a move that signals a significant shift in global AI strategy, companies such as OpenAI, Google and Perplexity AI are partnering with Indian telecoms and service providers to offer premium AI tools, for example, advanced chatbot access and large-model features, free for millions of users in India.
The offers are not merely promotional but part of a long-term play to dominate the AI ecosystem.
Market analysts quoted by the BBC note that the objective is to ‘get Indians hooked on to generative AI before asking them to pay for it’. The size of India’s digital ecosystem, with its young, mobile-first population and relatively less restrictive regulation, makes it a key battleground for AI firms aiming for global scale.
However, there are risks: free access may raise concerns around privacy and data protection, algorithmic control and whether users are fully informed about how their data is used and when free offers will convert into paid services.
Salesforce has signed a definitive agreement to acquire Spindle AI, a company specialising in agentic analytics and machine learning. The deal aims to strengthen Salesforce’s Agentforce platform by integrating Spindle’s advanced data modelling and forecasting technologies.
Spindle AI has developed neuro-symbolic AI agents capable of autonomously generating and optimising scenario models. Its analytics tools enable businesses to simulate and assess complex decisions, from pricing strategies to go-to-market plans, using AI-driven insights.
Salesforce said the acquisition will enhance its focus on Agent Observability and Self-Improvement within Agentforce 360. Executives described Spindle AI’s expertise as critical to building more transparent and reliable agentic systems capable of explaining and refining their own reasoning.
The acquisition, subject to customary closing conditions, is expected to be completed in Salesforce’s fourth fiscal quarter of 2026. Once finalised, Spindle AI will join Agentforce to expand AI-powered analytics, continuous optimisation, and ROI forecasting for enterprise customers worldwide.
Sweden’s data protection authority, IMY, has opened an investigation into a massive ransomware-related data breach that exposed personal information belonging to 1.5 million people. The breach originated from a cyberattack on IT provider Miljödata in August, which affected roughly 200 municipalities.
Hackers reportedly stole highly sensitive data, including names, medical certificates, and rehabilitation records, much of which has since been leaked on the dark web. Swedish officials have condemned the incident, calling it one of the country’s most serious cyberattacks in recent years.
The IMY said the investigation will examine Miljödata’s data protection measures and the responses of several affected public bodies, including Gothenburg, Älmhult and Västmanland. The regulator aims to identify security shortcomings and strengthen defences against future cyber threats.
Authorities have yet to confirm how the attackers gained access to Miljödata’s systems, and no completion date for the investigation has been announced. The breach has reignited calls for tighter cybersecurity standards across Sweden’s public sector.
Hackers are experimenting with malware that taps large language models to morph in real time, according to Google’s Threat Intelligence Group (GTIG). An experimental family dubbed PROMPTFLUX can rewrite and obfuscate its own code as it executes, aiming to sidestep static, signature-based detection.
PROMPTFLUX interacts with Gemini’s API to request on-demand functions and ‘just-in-time’ evasion techniques, rather than hard-coding behaviours. GTIG describes the approach as a step toward more adaptive, partially autonomous malware that dynamically generates scripts and changes its footprint.
Investigators say the current samples appear to be in development or testing, with incomplete features and limited Gemini API access. Google says it has disabled associated assets and has not observed a successful compromise, yet warns that financially motivated actors are exploring such tooling.
Researchers point to a maturing underground market for illicit AI utilities that lowers barriers for less-skilled offenders. State-linked operators in North Korea, Iran, and China are reportedly experimenting with AI to enhance reconnaissance, influence, and intrusion workflows.
Defenders are turning to AI, using security frameworks and agents like ‘Big Sleep’ to find flaws. Teams should expect AI-assisted obfuscation, emphasise behaviour-based detection, watch model-API abuse, and lock down developer and automation credentials.
A High Court judge warned that a solicitor who pushed an expert to accept an AI-generated draft breached their duty. Mr Justice Waksman called it a gross breach and cited a case reported in the latest survey. He noted that 14% of experts would accept such terms, a figure he deemed unacceptable.
Updated guidance clarifies what limited judicial AI use is permissible. Judges may use a private ChatGPT 365 service to produce summaries, with prompts kept confidential. There is no duty to disclose such use, but the judgment must remain the judge’s own.
Waksman cautioned against legal research or analysis done by AI. Hallucinated authorities and fake citations have already appeared. Experts must not let AI answer the questions they are retained to decide.
Survey findings show wider use of AI for drafting and summaries. Waksman drew a bright line between back-office aids and core duties. Convenience cannot trump independence, accuracy and accountability.
For practitioners, two rules follow. Solicitors must not foist AI-drafted opinions on experts, and experts should refuse them. Within courts, limited, non-determinative AI use may assist, but outcomes must remain human decisions.
Oracle Health and Life Sciences has announced a strategic collaboration with the Cancer Center Informatics Society (Ci4CC) to accelerate AI innovation in oncology. The partnership unites Oracle’s healthcare technology with Ci4CC’s national network of cancer research institutions.
The two organisations plan to co-develop an electronic health record system tailored to oncology, integrating clinical and genomic data for more effective personalised medicine. They also aim to explore AI-driven drug development to enhance research and patient outcomes.
Oracle executives said the collaboration represents an opportunity to use advanced AI applications to transform cancer research. The Ci4CC President highlighted the importance of collective innovation, noting that progress in oncology relies on shared data and cross-institution collaboration.
The agreement, announced at Ci4CC’s annual symposium in Miami Beach, US, remains non-binding but signals growing momentum in AI-driven precision medicine. Both organisations see the initiative as a step towards turning medical data into actionable insights that could redefine oncology care.
Large language models (LLMs) are increasingly used to grade, hire, and moderate text. Research from the University of Zurich (UZH) shows that evaluations shift when models are told who wrote identical text, revealing source bias. Agreement stayed high only when authorship was hidden.
When told a human or another AI wrote the text, agreement fell and biases surfaced. The strongest bias was against text attributed to Chinese authors, and it appeared across all models, including a model from China, with sharp drops even for well-reasoned arguments.
AI models also preferred ‘human-written’ over ‘AI-written’, showing scepticism toward machine-authored text. Such identity-triggered bias risks unfair outcomes in moderation, reviewing, hiring, and newsroom workflows.
Researchers recommend identity-blind prompts, A/B checks with and without source cues, structured rubrics focused on evidence and logic, and human oversight for consequential decisions.
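The A/B check the researchers recommend can be sketched as a pair of prompts that differ only in a source cue, with the score difference as the bias signal. This is a minimal illustration: `score_fn` is a toy stand-in for an LLM grader, and the rubric wording and labels are assumptions, not the study’s actual protocol.

```python
# Sketch of an identity-blind A/B check for LLM evaluators.
# The scoring function here is hypothetical; in practice it would call an LLM.

def build_prompts(text: str, source_label: str) -> tuple[str, str]:
    """Return (blind, cued) evaluation prompts for the same text."""
    rubric = (
        "Rate the argument below from 1-10 on evidence and logic only. "
        "Reply with a single integer.\n\n"
    )
    blind = rubric + f"Argument:\n{text}"
    cued = rubric + f"Argument (written by {source_label}):\n{text}"
    return blind, cued

def bias_gap(score_fn, text: str, source_label: str) -> float:
    """Cued score minus blind score; a nonzero gap suggests source bias."""
    blind, cued = build_prompts(text, source_label)
    return score_fn(cued) - score_fn(blind)

# Toy scorer that penalises any authorship cue, mimicking the
# identity-triggered drop the study reports.
toy_score = lambda p: 5.0 if "written by" in p else 8.0
print(bias_gap(toy_score, "Tariffs raise consumer prices.", "an AI model"))  # -3.0
```

Running the check over many texts and labels (human, AI, different nationalities), and comparing the gap distributions, is the structured version of the A/B test described above.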
They call for governance standards: disclose evaluation settings, test for bias across demographics and nationalities, and set guardrails before sensitive deployments. Transparency on prompts, model versions, and calibration is essential.
The National and Kapodistrian University of Athens has announced a new partnership with Google to enhance university-level education in AI. The collaboration grants all students free 12-month access to Google’s AI Pro programme, a suite of advanced learning and research tools.
Through the initiative, students can use Gemini 2.5 Pro, Google’s latest AI model, along with Deep Research and NotebookLM for academic exploration and study organisation. The offer also includes 2 TB of cloud storage and access to Veo 3 for video creation and Jules for coding support.
The programme aims to expand digital literacy and increase hands-on engagement with generative and research-driven AI tools. By integrating these technologies into everyday study, the university hopes to cultivate a new generation of AI-experienced graduates.
University officials view the collaboration as a milestone for AI-driven education in Greece, following recent national initiatives to introduce AI programmes in schools and healthcare. The partnership marks a significant step in aligning higher education with the global digital economy.