Phishing messages target IndiaAI and Impact Summit 2026 participants

IndiaAI has issued an urgent advisory warning of a phishing campaign targeting attendees of the India AI Impact Summit 2026. Fraudulent SMS and WhatsApp messages claim refunds are pending and request sensitive financial details.

Organisers said the messages are not official and have not been authorised. The fraudulent messages urge recipients to click links and provide full card numbers, WhatsApp numbers, and other contact details to ‘process’ refunds.

IndiaAI advised participants not to click suspicious links or share personal or banking information with unverified sources. Attendees in India are encouraged to delete such messages immediately and block the sender’s number.

Anyone who may have submitted details through a suspicious link should contact their bank without delay to secure their accounts. Organisers stressed that event-related communication will only be shared through official channels.

The advisory was issued under the AI Impact Summit 2026 banner, themed ‘Welfare for All | Happiness of All’, as authorities seek to prevent financial fraud linked to the high-profile gathering.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Saudi Arabia joins GPAI to help shape the future of AI governance

The Global Partnership on Artificial Intelligence (GPAI), a multilateral initiative hosted by the OECD and launched by the G7, has officially welcomed Saudi Arabia as a new member. The move reflects the Kingdom’s commitment to shaping global AI governance and ethical technology use.

Accession is led by the Saudi Data and Artificial Intelligence Authority and supported by Crown Prince Mohammed bin Salman. Joining GPAI aligns with Vision 2030, which aims to localise advanced technologies and boost the digital economy’s contribution to GDP.

Through membership in GPAI, which unites over 40 countries, Saudi Arabia will help establish international AI standards, promote human-centric and responsible AI development, and strengthen global cooperation in the sector.

Officials also anticipate that the move will attract high-quality international investment, leveraging the Kingdom’s expanding regulatory framework and growing AI and data ecosystem.

Deutsche Bank expands digital asset plans 

The German banking giant has applied for a digital asset custody licence from BaFin, marking a significant step in its expansion into cryptocurrency services. The move positions Deutsche Bank to offer safekeeping solutions for clients seeking exposure to digital assets.

The plans form part of a broader strategy to build a dedicated digital assets division, according to David Lynne, a commercial banking executive. Parallel initiatives at DWS Group, Deutsche Bank’s asset management arm, highlight rising institutional interest in crypto partnerships in Germany.

The bank has already experimented with a tokenised investment platform, developed in Singapore with Memento Blockchain, which enables access to digital asset funds through fiat on-ramps.

This activity mirrors wider domestic momentum: Deutsche WertpapierService Bank has already launched crypto infrastructure linking traditional and digital accounts.

Regulatory clarity and growing client demand appear to be key drivers, with Deutsche Bank signalling a cautious yet deliberate approach to integrating cryptocurrencies into its mainstream banking services.

Secure quantum-safe optical transport strengthens Japan’s AI data center infrastructure

Nokia and KDDI Corporation demonstrated quantum-safe optical transport at Sakai Data Center, supporting advanced AI workloads. The network aims to deliver secure, uninterrupted data transfer while protecting sensitive AI operations.

The demonstration showcases KDDI’s scalable AI-ready infrastructure for real-time training, inference, and analytics. Quantum-safe encryption and resilient transport protect customer data and critical infrastructure across Japan’s distributed data centres.

Using Nokia’s 1830 Photonic Service Switch (PSS) and 1830 Security Management Server (SMS), the partners validated high-capacity, secure optical connectivity. The solution delivers privacy, reliability, and fast quantum-safe encryption for modern AI workloads.

Executives from both companies emphasised the importance of secure, scalable networks in enabling AI-driven services. Nokia and KDDI will continue advancing quantum-safe data centre connectivity, supporting Japan’s digital infrastructure and key enterprise applications.

China sets new record in rare disease AI diagnosis

A Chinese research team has developed an AI-powered system, DeepRare, to diagnose rare diseases with unprecedented accuracy.

The project, led by Shenhua Hospital and a partnering university’s School of Artificial Intelligence, has already attracted over 1,000 specialised users from more than 600 medical and research institutions worldwide.

Tests show DeepRare achieves 57.18 percent accuracy using only clinical data, marking a 24-point improvement over previous models. Including genetic data raises accuracy above 70 percent, showing potential to improve diagnosis in areas without advanced testing.

The system draws on an extensive knowledge base of medical literature and real-world cases. Its cycle of hypothesis, validation, and self-review boosts reliability and fills reasoning gaps, surpassing the limits of traditional AI models.
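
The hypothesis-validation-self-review cycle described above can be pictured with a toy loop. The disease profiles, overlap scoring, and acceptance threshold below are invented stand-ins for illustration, not DeepRare’s actual knowledge base or algorithm.

```python
# Toy hypothesis-validation-self-review loop for diagnosis.
# KNOWLEDGE_BASE and the scoring rule are illustrative only.
KNOWLEDGE_BASE = {
    "disease_a": {"fever", "rash", "joint_pain"},
    "disease_b": {"fever", "fatigue"},
    "disease_c": {"rash", "fatigue", "weight_loss"},
}

def diagnose(symptoms, min_overlap=0.5):
    """Propose candidates in ranked order, validate each, reject misfits."""
    # Hypothesis: rank candidates by how many observed findings they share.
    ranked = sorted(
        KNOWLEDGE_BASE,
        key=lambda d: len(KNOWLEDGE_BASE[d] & symptoms),
        reverse=True,
    )
    for candidate in ranked:
        profile = KNOWLEDGE_BASE[candidate]
        # Validation: accept only if enough of the profile is observed.
        overlap = len(profile & symptoms) / len(profile)
        if overlap >= min_overlap:
            return candidate, overlap
        # Self-review: discard this hypothesis and try the next one.
    return None, 0.0
```

The point of the loop is that a rejected hypothesis is not the end of the process: the system revisits its ranking and moves on, which is the reliability-boosting behaviour the article attributes to DeepRare.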

By enhancing transparency and precision, DeepRare offers a practical tool for clinicians facing the persistent challenge of identifying rare diseases, potentially setting a new global standard for AI-assisted diagnostics.

Generative AI presents the biggest data-risk challenge in history

Cybersecurity specialists warn that generative AI systems, such as large language models, are creating a data risk frontier far larger than that posed by previous digital innovations.

Because these models are trained on extensive datasets drawn from web pages, internal documents, email corpora and proprietary sources, they can unintentionally memorise or regenerate sensitive information, increasing the risk of exposure.

The article highlights several core concerns:

Data leakage and memorisation: AI models can repeat or infer private data if training processes are not tightly controlled.

Amplification of poor hygiene: generative tools can magnify the reach of bad actors by automating phishing, social engineering, and malware generation at scale.

Compounding breach impact: if a model is trained on stolen or leaked data, it can internalise and regurgitate that information without detection, entrenching harm.

Cloud and access governance gaps: organisations that adopt AI without robust access controls and encryption may widen their attack surface.

The author calls for revised data governance frameworks, including strict training data provenance, auditability, encryption, minimisation and purpose limitation, to mitigate what is described as ‘the biggest data risk in history.’
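
The minimisation step can be sketched in a few lines: redact obvious identifiers before a document enters a training corpus. The regex patterns below are illustrative; real pipelines rely on dedicated PII-detection tooling, audit logging, and human review.

```python
import re

# Illustrative data-minimisation pass: replace e-mail addresses and
# card-like digit runs (13-16 digits with optional separators) with
# placeholders before the text reaches a training set.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
CARD = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def redact(text: str) -> str:
    """Return text with obvious identifiers replaced by placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = CARD.sub("[CARD]", text)
    return text
```

A pass like this implements only the minimisation principle; provenance, auditability, and purpose limitation require process controls, not string substitution.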

Recommendations also include accountability measures for models, continuous monitoring, and legislative action to align AI development with privacy and security principles.

Quantum computing breakthrough slows information loss

Chinese scientists have observed and controlled a rare intermediate state in a quantum system, effectively slowing quantum chaos. Using the 78-qubit Chuang Tzu 2.0 superconducting processor, researchers demonstrated how a temporary stable phase can be extended or shortened.

The team identified a prethermalisation plateau, a brief period during which the system resists disorder before rapidly descending into full complexity. Careful adjustment of control sequences enabled scientists to tune the rate of quantum decoherence and control how information spreads.

Findings, published in Nature, offer a potential window for preserving fragile quantum information. Longer coherence times could significantly improve the reliability of quantum computing and error correction methods.

Researchers say the work also highlights the advantage of quantum processors in simulating phenomena too complex for classical supercomputers. Applications may range from drug discovery and advanced materials research to next-generation secure communications.

Continued development of larger and more powerful quantum chips is now underway. Mastering such transitional states will be crucial to unlocking the full potential of quantum technologies.

Digital addiction in Italy sparks debate over social media bans

Italy has warned that digital addiction among teenagers is rising sharply, as health authorities link excessive social media and gaming use to family and educational challenges. Officials say bans alone will not resolve the issue.

According to Italy’s National Institute of Health, about 100,000 young people aged 15 to 18 are at risk of social media addiction. A further 500,000 are estimated to suffer from gaming disorder, recognised by the World Health Organisation as a medical condition.

A survey by digital ethics group Social Warning found that 77 percent of Italian teenagers consider themselves addicted to their devices. However, many say they lack the tools or support to change their behaviour.

Research by ‘Con i Bambini’, which funds projects tackling educational poverty in Italy, links digital dependency to isolation and strained parental relationships. The organisation says legislative measures can protect minors but cannot replace structured education and family support.

The debate extends across the EU. The European Parliament has called for a minimum age of 16 for social media platforms, while France, Italy, and Spain are considering national restrictions. Experts argue that prevention and digital literacy must complement regulation.

Fake Google Forms phishing campaign targets job seekers

A phishing campaign is targeting job seekers with fake Google Forms pages designed to harvest account credentials. Attackers are using a spoofed domain, forms.google.ss-o[.]com, to mimic the legitimate Google Forms service and trick victims into signing in.

The fraudulent pages advertise a Customer Support Executive role and prompt applicants to enter personal details before clicking a ‘Sign in’ button. Victims are then redirected to id-v4[.]com/generation.php, a domain previously linked to credential harvesting campaigns.

Researchers identified the operation as part of a broader wave of job-themed phishing attacks. The attackers used a script called generation_form.php to create personalised tracking links and implemented redirects to evade security analysis by sending suspicious visitors to local Google search pages.

Security experts warn that the campaign relies on domain impersonation techniques, including the use of ‘ss-o’ to resemble ‘single sign-on’. The fake site reproduces Google branding elements and standard disclaimers to increase credibility.
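
The impersonation trick works because the brand name appears in the hostname while the actual registrable domain belongs to the attacker. A naive check can be sketched as follows; the two-label heuristic below ignores multi-part public suffixes such as co.uk, which production code should handle with a public-suffix list. The spoofed domain from the campaign is written undefanged here purely as a string for illustration.

```python
from urllib.parse import urlparse

# Trusted registrable domains; anything else carrying the brand name
# in its hostname is treated as suspicious.
TRUSTED = {"google.com"}

def registrable_domain(url: str) -> str:
    """Naive registrable domain: the last two labels of the hostname."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def looks_spoofed(url: str) -> bool:
    """Flag URLs that drop a trusted brand name into a host registered elsewhere."""
    host = urlparse(url).hostname or ""
    brand_present = any(t.split(".")[0] in host for t in TRUSTED)
    return brand_present and registrable_domain(url) not in TRUSTED
```

Applied to the campaign’s lure, the check flags forms.google.ss-o.com because its registrable domain is ss-o.com, while a genuine forms.google.com address passes.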

Users are advised to avoid clicking unsolicited job links, verify opportunities through official channels, and enable multi-factor authentication. Password managers and real-time anti-malware tools can also reduce exposure to credential theft.

EVMbench from OpenAI, Paradigm and OtterSec measures AI smart contract risks

OpenAI, with Paradigm and OtterSec, introduced EVMbench to test how AI agents detect, patch, and exploit smart contract flaws. The benchmark draws on 120 real vulnerabilities from 40 blockchain projects to better reflect live conditions.

Researchers report that leading agents can now discover and exploit vulnerabilities end-to-end in live blockchain instances. Over six months, exploit success rates rose sharply, prompting both praise for improved auditing capabilities and concern over how quickly offensive skills are scaling.

EVMbench evaluates agents across three modes: detect, patch, and exploit. Each stage reflects increasing technical complexity and mirrors the responsibilities faced in production blockchain environments, where contracts are often immutable, and errors can lead to irreversible losses.
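
A benchmark structured around those three modes reduces, at scoring time, to per-mode success rates. The record format and scoring rule below are invented for illustration and are not EVMbench’s actual interface, which runs agents against sandboxed chain instances.

```python
from collections import defaultdict

# Minimal scoring sketch for a detect/patch/exploit benchmark:
# aggregate (mode, passed) task records into a success rate per mode.
def success_rates(results):
    """results: iterable of (mode, passed) pairs -> {mode: pass rate}."""
    totals, passes = defaultdict(int), defaultdict(int)
    for mode, passed in results:
        totals[mode] += 1
        passes[mode] += int(passed)
    return {mode: passes[mode] / totals[mode] for mode in totals}
```

Tracking the rates separately matters because the three modes carry very different risk profiles: a rising detect rate is good news for auditors, while a rising exploit rate is the trend the researchers flag as concerning.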

Recent incidents underline the stakes. A vulnerability in AI-generated Solidity code reportedly mispriced an asset, triggering liquidations and losses. Such cases highlight the risks of deploying AI-written financial logic without rigorous human review and governance safeguards.

While EVMbench advances the measurement of AI capabilities, it remains limited to curated vulnerabilities and sandboxed conditions. As blockchain adoption expands and criminal misuse evolves, researchers stress the need for responsible AI development alongside stronger smart contract security practices.
