AI model raises security risks, prompting release concerns, reports say

Anthropic is reported to have declined to release its latest AI model, Mythos, citing potential risks to global cybersecurity. The system is said to be capable of identifying vulnerabilities across major operating systems and web browsers, raising concerns about possible misuse.

Reports indicate that the company is investigating claims that unauthorised actors may have accessed the model. The reported breach has intensified debate about whether technology firms can maintain control over increasingly powerful AI systems as development accelerates.

The Mythos model is described as part of a new class of AI tools capable of analysing complex digital environments and identifying weaknesses at scale. Such capabilities could support cybersecurity efforts, but may also present risks if exploited by malicious actors.

The case has contributed to discussions within the technology sector about balancing innovation with efforts to manage potential risks to digital infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Singapore’s HTX signs agreements to advance public safety technologies

The Home Team Science and Technology Agency has signed 10 agreements with partners across government, industry and academia to advance public safety technologies. The announcement was made at MTX 2026.

The partnerships focus on areas including AI, space technology and cybersecurity, aiming to accelerate development of next-generation capabilities for public safety operations.

Several agreements involve industry collaboration to apply commercial innovations, while others expand research links with academic institutions to deepen expertise in areas such as forensics and autonomous systems.

HTX said the partnerships will strengthen collaboration, innovation and knowledge sharing across the public safety ecosystem. The agreements were announced at an event in Singapore.


Study examines trust and fraud prevention in AI-enabled banking in Bangladesh

A new non-peer-reviewed preprint examines how AI is shaping e-banking in Bangladesh, focusing on consumer decision-making, ethical trust, and fraud prevention.

The paper links AI adoption in digital banking to customer experience, risk management, process automation, financial inclusion and regulatory compliance, arguing that these factors are increasingly important as Bangladesh’s financial sector becomes more digital.

The study uses a narrative literature review of research published in 2024 and 2025 and builds its conceptual model on the UTAUT2 framework, which is commonly used to explain technology adoption.

The authors extend the model by adding ethical trust and fraud prevention as mediating mechanisms, arguing that consumers are more likely to use AI-enabled banking services when they see them as useful, secure, transparent and fair.

Ethical trust is treated as a central part of adoption. The paper identifies transparency, algorithmic fairness, data privacy, reliability, accountability and digital inclusion as key factors shaping how users respond to AI in banking.

It also notes that explainable AI tools and localised interfaces, including Bengali-language systems, could help reduce uncertainty for users with lower digital literacy.

Fraud prevention is presented as a critical enabler of consumer confidence. The authors point to real-time monitoring, anomaly detection, secure authentication, biometric e-KYC and explainable fraud alerts as tools that can reduce perceived risk.
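The anomaly-detection element mentioned above can be illustrated with a minimal z-score check on transaction amounts. This is a simplified sketch for illustration only, not a method from the paper, and the function name and figures are hypothetical; production systems use far richer features and models.

```python
# Hypothetical sketch of transaction anomaly detection with a z-score:
# flag a new transaction whose amount deviates sharply from the
# customer's recent history.
from statistics import mean, stdev

def is_anomalous(history, new_amount, threshold=3.0):
    """Return True if new_amount lies more than `threshold` standard
    deviations from the mean of the customer's historical amounts."""
    mu = mean(history)
    sigma = stdev(history) or 1.0  # guard against zero variance
    return abs(new_amount - mu) / sigma > threshold

recent = [120.0, 95.0, 110.0, 105.0, 130.0, 98.0]
print(is_anomalous(recent, 115.0))   # → False (ordinary amount)
print(is_anomalous(recent, 5000.0))  # → True  (large outlier)
```

In practice such a rule would be one signal among many, combined with device, location and behavioural features, and paired with the explainable alerts the authors recommend.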

Additionally, they argue that AI systems should not only detect fraud effectively, but also explain decisions clearly enough for users to trust them.

The paper also highlights Bangladesh-specific issues, including Islamic banking, Shariah-compliant AI models, rural and urban digital access gaps, and the need for inclusive design. However, the study remains conceptual and has not yet been peer reviewed.

The authors recommend future empirical research with Bangladeshi banking users to test the model across income levels, regions, generations and gender groups.


UK NCSC publishes framework on adversarial attacks against AI systems

The UK’s National Cyber Security Centre has published a paper on adversarial attacks against machine learning and AI, setting out a framework for understanding attacks that target the operation of ML models. The paper introduces a common language intended to support awareness, threat modelling, and collaboration on AI security.

The NCSC says ML systems present a larger attack surface than traditional software because of rapid development cycles, unique architectures, large model sizes, and the widespread use of open-source components. It distinguishes adversarial machine learning attacks from broader cyberattacks by focusing on those that exploit vulnerabilities specific to the architecture, training, or operation of ML models.

The paper defines seven attack classes:

  • model characterisation
  • model inversion
  • training data poisoning
  • malicious model training
  • model input manipulation
  • model artefact manipulation
  • model hardware attacks

It says these attacks can occur across development, training, and deployment, and may target both hardware and software components.
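As a concrete illustration of one class in the taxonomy, model input manipulation, the sketch below applies an FGSM-style perturbation to a toy logistic classifier. The model, weights and step size are invented for illustration and do not come from the NCSC paper.

```python
# Hypothetical sketch of "model input manipulation": a fast-gradient-sign
# perturbation that flips a toy logistic classifier's prediction.
import numpy as np

w = np.array([1.0, -2.0, 0.5])   # toy model weights (illustrative)
b = 0.1

def predict(x):
    """Logistic score; > 0.5 is read as class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.2, -0.4, 0.3])   # clean input, scored as class 1

# For a linear model the gradient of the logit w.r.t. the input is just w,
# so stepping against sign(w) is the quickest way to lower the score.
eps = 0.6
x_adv = x - eps * np.sign(w)

print(predict(x) > 0.5)      # → True  (clean input stays in class 1)
print(predict(x_adv) > 0.5)  # → False (perturbed input flips the class)
```

The same idea scales to deep models, where the attacker estimates the gradient numerically or via a surrogate model, which is why input manipulation is treated as a deployment-time risk.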

The NCSC also maps those attack classes against eight potential goals of a malicious actor, including reconnaissance, degrading performance, wasting resources, embedding hidden behaviours, evading detection, extracting data, and gaining wider system access. The table on pages 11-12 links each class to one or more of those goals.

The paper argues that standard cybersecurity controls remain foundational, but says ML-specific weaknesses often require dedicated mitigations that are not yet mature or widely deployed.

It calls for more research into underdeveloped areas, such as model-hardware attacks and malicious model training, and recommends greater use of frameworks and guidance from the NCSC, ETSI, and the UK government’s AI cybersecurity code of practice.


Brazil’s Ceará state introduces AI assistant for document review

The Junta Comercial do Estado do Ceará has launched an AI-powered document analysis assistant, marking the first public-facing AI service by the Government of the State of Ceará in Brazil. The initiative was announced through an official statement.

The tool is integrated into the Jucec services portal and acts as a pre-analysis system. It reviews documents, cross-checks data and identifies inconsistencies before formal submission.

Officials say the AI system allows users to correct errors in advance, reducing delays and improving efficiency. The analysis is conducted quickly and clearly highlights issues for businesses and accountants.

The initiative is part of wider efforts to modernise public services and support digital transformation in Brazil.


Powerful Gemini update turns simple prompts into ready-to-use results

Gemini can now generate downloadable and ready-to-share files directly in chat across a wide range of formats, including PDF, Microsoft Word, Excel, Google Docs, Sheets, and Slides.

The new feature is meant to remove the extra steps that often follow AI-assisted brainstorming, such as copying content into other applications and reformatting it manually. Instead, users can ask Gemini to create a structured file that is already formatted and ready to download or export to Google Drive.

Supported formats include Google Workspace files, PDF, DOCX, XLSX, CSV, LaTeX, TXT, RTF, and Markdown. The company says the feature is now available globally to all Gemini app users.

Possible uses include turning budget plans into spreadsheets, organising rough ideas into structured documents, converting long discussions into concise reports, and generating PDF study guides from uploaded lecture notes.

Why does it matter?

What changes here is not simply that Gemini can create more file types, but that it moves AI one step closer to replacing part of the software workflow itself. Instead of using AI to generate rough text and then finishing the task manually in Word, Excel, or Google Docs, users can now get output in a format that is already structured for immediate use.

That may reduce friction between prompting and execution, making AI more useful in everyday work, study, and administration. In practical terms, the update pushes Gemini further from being just a conversational assistant towards becoming a tool that can produce finished digital outputs people can actually work with.


China pushes AI self-reliance while expanding global cooperation

Chinese Vice Premier Ding Xuexiang has reiterated China’s emphasis on AI self-reliance while also calling for deeper international cooperation, underscoring a dual approach to technology policy amid rising global competition. Speaking at the opening of the 9th Digital China Summit, he presented AI as an important part of China’s wider modernisation agenda.

Ding said China should strengthen self-reliance and independent innovation in AI, arguing that the sector must be able to withstand external pressure and attempts at suppression. He also emphasised application-driven development, calling for faster integration of AI into the real economy to support productivity and industrial transformation.

Alongside those domestic priorities, he called for a more collaborative innovation ecosystem, including closer coordination across the AI industry chain. Internationally, he advocated open and mutually beneficial cooperation, with particular emphasis on computing power, data, and talent.

Regulation also featured prominently in the speech. Ding said AI development must remain safe and controllable, with stronger oversight to ensure the technology serves human interests and remains under human control. Taken together, the message reflects China’s broader effort to balance technological sovereignty with continued international engagement.


Latvia shows average AI tool adoption levels

Recent data from Eurostat and the Central Statistical Bureau of Latvia highlights that around one-third of people in Latvia use AI tools. Latvian Public Media reports that usage broadly matches the EU average.

In Latvia, 35.1 percent of internet users reported using AI in 2025, slightly above the EU figure of 33 percent. Adoption is highest among younger people, with nearly three-quarters of those aged 16 to 24 using such tools.

Usage varies across demographics, with higher rates among educated users and employed individuals. Men use AI slightly more than women, while regional differences show stronger uptake in the Riga area.

Many non-users say they see no need for AI, while others cite a lack of skills or awareness. The findings are based on official statistics from Latvia.


Digital Dubai rolls out AI workforce programme across public sector

Digital Dubai has launched the AI Workforce Transformation Programme to train 50,000 government employees in AI skills. The initiative is being delivered with the Dubai Government Human Resources Department and the Dubai Centre for Artificial Intelligence.

The programme aims to equip staff with practical knowledge to apply AI in public services and internal processes. It includes tailored training tracks based on job roles, from leadership to general employees.

Officials say the initiative will improve productivity, support innovation and enable more efficient service delivery. It also forms part of wider efforts to strengthen AI adoption across government operations.

The programme is designed to build long-term institutional capabilities and support a technology-driven government model.


AI research collaboration expands as Google plans campus in South Korea

A major step in global AI expansion is underway as Google prepares to establish its first overseas AI campus in Seoul in 2026. The initiative reflects a broader effort to deepen collaboration between global technology firms and regional innovation ecosystems.

The project is being developed in coordination with Google DeepMind and institutions in South Korea, with a dedicated research team expected to support joint development. Around ten specialists will lead technical cooperation, strengthening links between academia, startups and industry.

A central pillar of this collaboration is the K-Moonshot Project, which applies AI to challenges in biotechnology, climate and energy. Alongside this, an agreement with the Ministry of Science and ICT aims to enhance research capabilities and develop specialised human capital in advanced technologies.

The initiative highlights a growing convergence between national innovation strategies and global AI leadership, signalling a shift towards more distributed and collaborative research infrastructures across regions.
