Victorian officials outline approach to managing AI risks in public sector

Ian Pham at the Victorian Managed Insurance Authority (VMIA) outlined approaches to managing AI adoption during the PSN Victorian Government Cyber Security Showcase. Organisations face the challenge of adopting AI while maintaining effective risk management as these systems become more embedded in government operations.

Cybersecurity teams have traditionally operated with a risk-averse approach focused on minimising threats. Such an approach can slow innovation when applied to AI systems used in public sector environments.

A shift towards managing risk in line with organisational objectives is presented as necessary. This includes prioritising relevant risks and moving from reactive responses towards supporting decision-making processes.

AI adoption involves secure environments for experimentation with defined guardrails, including synthetic or non-sensitive data, monitoring mechanisms, usage conditions, and identity and access controls. Exposure can then be increased gradually, supported by governance and continuous reassessment.
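The guardrails described above can be read as a gate that sits in front of the experimentation environment. The sketch below is a hypothetical illustration of such a gate, assuming simple role and data-classification rules; the names and policies are invented for the example and are not VMIA's actual controls.

```python
from dataclasses import dataclass

# Illustrative sandbox policy (assumptions, not real VMIA rules):
ALLOWED_DATA_CLASSES = {"synthetic", "public"}   # non-sensitive data only
ALLOWED_ROLES = {"analyst", "developer"}         # identity/access control

@dataclass
class SandboxRequest:
    user_role: str
    data_class: str   # e.g. "synthetic", "public", "personal"
    purpose: str

def guardrail_check(req: SandboxRequest) -> tuple[bool, str]:
    """Return (allowed, reason) so every decision can be logged and audited."""
    if req.user_role not in ALLOWED_ROLES:
        return False, f"role '{req.user_role}' not approved for the sandbox"
    if req.data_class not in ALLOWED_DATA_CLASSES:
        return False, f"data class '{req.data_class}' is not permitted"
    return True, "request within sandbox guardrails"

# An allowed request versus one blocked for using sensitive data
ok, why = guardrail_check(SandboxRequest("analyst", "synthetic", "testing"))
blocked, reason = guardrail_check(SandboxRequest("analyst", "personal", "testing"))
```

Returning a reason string alongside the decision supports the monitoring and continuous-reassessment loop: exposure can be widened later simply by relaxing the allow-lists.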

Risks linked to AI systems include data leakage, privacy concerns, unauthorised use, and data quality issues. These risks are described as requiring visibility and management, alongside organisational awareness and engagement to support confidence in AI use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI model raises security risks, prompting release concerns, reports say

Anthropic is reported to have declined to release its latest AI model, Mythos, citing potential risks to global cybersecurity. The system is reported to be capable of identifying vulnerabilities across major operating systems and web browsers, raising concerns about possible misuse.

Reports indicate that the company is investigating claims that unauthorised actors may have accessed the model. A reported breach has intensified debate about whether technology firms can maintain control over increasingly powerful AI systems as development accelerates.

The Mythos model is described as part of a new class of AI tools capable of analysing complex digital environments and identifying weaknesses at scale. Such capabilities could support cybersecurity efforts, but may also present risks if exploited by malicious actors.

The case has contributed to discussions within the technology sector about balancing innovation with efforts to manage potential risks to digital infrastructure.

Singapore’s HTX signs agreements to advance public safety technologies

The Home Team Science and Technology Agency has signed 10 agreements with partners across government, industry and academia to advance public safety technologies. The announcement was made at MTX 2026.

The partnerships focus on areas including AI, space technology and cybersecurity, aiming to accelerate development of next-generation capabilities for public safety operations.

Several agreements involve industry collaboration to apply commercial innovations, while others expand research links with academic institutions to deepen expertise in areas such as forensics and autonomous systems.

HTX said the partnerships will strengthen collaboration, innovation and knowledge sharing across the public safety ecosystem.

Study examines trust and fraud prevention in AI-enabled banking in Bangladesh

A new non-peer-reviewed preprint examines how AI is shaping e-banking in Bangladesh, focusing on consumer decision-making, ethical trust, and fraud prevention.

The paper links AI adoption in digital banking to customer experience, risk management, process automation, financial inclusion and regulatory compliance, arguing that these factors are increasingly important as Bangladesh’s financial sector becomes more digital.

The study uses a narrative literature review of research published in 2024 and 2025 and builds its conceptual model on the UTAUT2 framework, which is commonly used to explain technology adoption.

The authors extend the model by adding ethical trust and fraud prevention as mediating mechanisms, arguing that consumers are more likely to use AI-enabled banking services when they see them as useful, secure, transparent and fair.

Ethical trust is treated as a central part of adoption. The paper identifies transparency, algorithmic fairness, data privacy, reliability, accountability and digital inclusion as key factors shaping how users respond to AI in banking.

It also notes that explainable AI tools and localised interfaces, including Bengali-language systems, could help reduce uncertainty for users with lower digital literacy.

Fraud prevention is presented as a critical enabler of consumer confidence. The authors point to real-time monitoring, anomaly detection, secure authentication, biometric e-KYC and explainable fraud alerts as tools that can reduce perceived risk.

Additionally, they argue that AI systems should not only detect fraud effectively, but also explain decisions clearly enough for users to trust them.
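One minimal reading of "anomaly detection plus explainable fraud alerts" is a statistical check on transaction amounts that also states, in plain language, why a payment was flagged. The sketch below illustrates the idea with a rolling z-score; the threshold, currency, and field names are assumptions for illustration, not taken from the study.

```python
import statistics

def flag_transaction(history: list[float], amount: float, threshold: float = 3.0):
    """Flag an amount that sits far outside the user's recent spending pattern."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0   # guard against zero variance
    z = (amount - mean) / stdev
    if abs(z) > threshold:
        # Explainable alert: say what was compared and why it was flagged
        return True, (f"Amount {amount:.0f} is {abs(z):.1f} standard deviations "
                      f"above your typical {mean:.0f}; please confirm this payment.")
    return False, "within normal spending range"

history = [900, 1100, 1000, 950, 1050]        # recent amounts (illustrative, in BDT)
flagged, message = flag_transaction(history, 25000)
```

A production system would use richer features and models, but the pairing matters: the detection decision and the human-readable explanation are produced together, which is the property the authors argue builds user trust.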

The paper also highlights Bangladesh-specific issues, including Islamic banking, Shariah-compliant AI models, rural and urban digital access gaps, and the need for inclusive design. However, the study remains conceptual and has not yet been peer reviewed.

The authors recommend future empirical research with Bangladeshi banking users to test the model across income levels, regions, generations and gender groups.

European Parliament set to push for faster Digital Markets Act compliance proceedings

Ahead of the review of the Digital Markets Act, the European Parliament is set to call for faster compliance proceedings and closer scrutiny of AI-driven search tools and cloud services.

In a draft resolution, MEPs are expected to urge the Commission to enforce the Digital Markets Act quickly and consistently, while adapting to technological change without reopening the law’s core objectives.

The text highlights the growing strategic importance of cloud computing services and the rising use of AI-driven search tools, arguing that both require closer scrutiny under the Digital Markets Act framework.

MEPs also warn against external political pressure aimed at weakening the law. They are expected to call on the Commission to make full use of its enforcement tools, including periodic penalty payments, to stop companies from bypassing it, regardless of where they are based.

The Digital Markets Act sets obligations for the largest digital companies providing key platform services in the EU, with the aim of supporting fair competition in digital markets. The draft resolution comes after the Commission’s first non-compliance decisions and fines under the law, including action against Meta over its ‘pay or consent’ advertising model and against Apple over anti-steering obligations.

Microsoft report highlights growing use of AI in healthcare systems

Healthcare systems worldwide are entering a new phase of digital transformation, driven by the rapid adoption of AI, as highlighted in a Microsoft report.

Growing administrative pressure, complex workflows and rising patient demand are pushing hospitals to integrate AI not as a future concept, but as an immediate operational tool to improve efficiency and care quality.

Across different regions, AI is being deployed to reduce clinician workload and streamline hospital operations.

In the United States, AI-assisted documentation tools are helping medical staff reduce time spent on administrative tasks, allowing them to focus more on patient care. Similar approaches are being applied globally to improve workflow efficiency and support overstretched healthcare professionals.

In emerging and developed markets alike, AI is also strengthening system resilience and accessibility. Applications range from improving pharmacy inventory management in Kenya to enhancing cybersecurity in Japan’s hospital networks following ransomware attacks.

In Spain, AI-based diagnostic tools are helping accelerate the detection of rare diseases, improving both speed and accuracy of medical decisions.

These developments highlight a broader shift in healthcare systems towards AI-driven infrastructure that supports not only clinical outcomes but also operational stability and data security.

Collaboration among healthcare providers, technology companies, and policymakers is becoming increasingly important to ensure that AI integration remains effective, responsible, and scalable.

Why does it matter? 

AI-driven healthcare transformation is reshaping how modern health systems operate at a structural level, shifting the focus from reactive treatment to more efficient, data-informed, and system-wide care delivery.

As hospitals increasingly rely on digital tools, the balance between human clinical expertise and automated support systems is being redefined.

From a broader perspective, the impact extends beyond hospitals and patients, influencing national health resilience, cost efficiency, and equitable access to care.

Countries that successfully integrate AI into healthcare infrastructure are likely to gain significant advantages in service quality, system sustainability, and their ability to respond to future public health challenges.

Brazil’s Ceará state introduces AI assistant for document review

The Junta Comercial do Estado do Ceará has launched an AI-powered document analysis assistant, marking the first public-facing AI service by the Government of the State of Ceará in Brazil. The initiative was announced through an official statement.

The tool is integrated into the Jucec services portal and acts as a pre-analysis system. It reviews documents, cross-checks data and identifies inconsistencies before formal submission.

Officials say the AI system allows users to correct errors in advance, reducing delays and improving efficiency. The analysis is conducted quickly and clearly highlights issues for businesses and accountants.
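A pre-analysis pass of this kind can be pictured as a validator that cross-checks a filing's fields and returns a list of inconsistencies before formal submission. The sketch below is a hypothetical illustration; the field names and rules are invented for the example and are not Jucec's actual checks.

```python
def pre_analyse(submission: dict) -> list[str]:
    """Cross-check a filing and list inconsistencies before formal submission."""
    issues = []
    required = ["company_name", "registration_number", "declared_capital"]
    for field in required:
        if not submission.get(field):
            issues.append(f"missing required field: {field}")
    # Cross-check: the name on the articles must match the name on the form
    articles = submission.get("articles_company_name")
    form_name = submission.get("company_name", "")
    if articles and articles.strip().lower() != form_name.strip().lower():
        issues.append("company name on form does not match articles of incorporation")
    capital = submission.get("declared_capital")
    if isinstance(capital, (int, float)) and capital < 0:
        issues.append("declared capital cannot be negative")
    return issues

filing = {"company_name": "Acme Ltda", "registration_number": "12345",
          "declared_capital": 50000, "articles_company_name": "Acme Comercio Ltda"}
problems = pre_analyse(filing)   # the name mismatch is caught before submission
```

Returning all issues at once, rather than rejecting on the first error, is what lets users correct everything in a single pass and is the main source of the time savings officials describe.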

The initiative is part of wider efforts to modernise public services and support digital transformation in Brazil.

New MIT research hub targets future of advanced computation

IBM and the MIT Schwarzman College of Computing have launched the MIT-IBM Computing Research Lab, expanding their long-running partnership into a broader research agenda focused on AI, algorithms, and quantum computing.

The initiative builds on the earlier MIT-IBM Watson AI Lab and reflects the rapid shift towards AI deployment and emerging quantum technologies.

The lab aims to explore the convergence of AI and quantum systems, including hybrid computing models that combine classical infrastructure with next-generation quantum hardware.

Research priorities include efficient AI architectures, advanced optimisation methods, and new algorithmic frameworks designed to improve reliability, transparency, and real-world applicability of machine learning systems.

Alongside AI development, the lab will focus on quantum algorithms for complex scientific problems in fields such as chemistry, biology, and materials science. Work will also address the mathematical foundations of modelling dynamic systems, with potential applications ranging from improved weather prediction to financial forecasting and supply chain optimisation.

Leaders from both MIT and IBM describe the lab as a platform for shaping the next generation of computing systems through integrated advances in AI and quantum technologies.

Why does it matter? 

The launch of the MIT-IBM Computing Research Lab signals a broader shift in how foundational computing breakthroughs are now being shaped through close academic–industry collaboration.

As AI and quantum computing converge, the boundaries of what machines can model, predict, and optimise are being fundamentally redefined.

From a wider perspective, these developments could reshape entire sectors, including healthcare, finance, climate science, and global logistics, by enabling faster and more accurate problem-solving at scales that classical systems cannot handle.

The direction of this research also matters for technological sovereignty, as countries and institutions compete to lead in next-generation computing capabilities that will underpin future economic and scientific power.

Powerful Gemini update turns simple prompts into ready-to-use results

Gemini can now generate downloadable and ready-to-share files directly in chat across a wide range of formats, including PDF, Microsoft Word, Excel, Google Docs, Sheets, and Slides.

The new feature is meant to remove the extra steps that often follow AI-assisted brainstorming, such as copying content into other applications and reformatting it manually. Instead, users can ask Gemini to create a structured file that is already formatted and ready to download or export to Google Drive.

Supported formats include Google Workspace files, PDF, DOCX, XLSX, CSV, LaTeX, TXT, RTF, and Markdown. Google says the feature is now available globally to all Gemini app users.

Possible uses include turning budget plans into spreadsheets, organising rough ideas into structured documents, converting long discussions into concise reports, and generating PDF study guides from uploaded lecture notes.

Why does it matter?

What changes here is not simply that Gemini can create more file types, but that it moves AI one step closer to replacing part of the software workflow itself. Instead of using AI to generate rough text and then finishing the task manually in Word, Excel, or Google Docs, users can now get output in a format that is already structured for immediate use.

That may reduce friction between prompting and execution, making AI more useful in everyday work, study, and administration. In practical terms, the update pushes Gemini further from being just a conversational assistant towards becoming a tool that can produce finished digital outputs people can actually work with.

United Nations warns AI-driven advertising could deepen information crisis

The United Nations has warned that the rapid adoption of AI in advertising could deepen a global information integrity crisis. With worldwide advertising spending now exceeding $1 trillion annually, concerns are growing over how automated systems influence what users see, trust, and engage with online.

A briefing by the Department of Global Communications and the Conscious Advertising Network places advertising at the centre of the digital information ecosystem. It argues that advertising helps fund and shape the systems that influence what people see and believe, while AI-driven tools are increasingly being used in media buying and content generation in ways that can amplify disinformation, hate speech, and opaque decision-making.

Transparency gaps in AI advertising systems are also raising concerns over fraud, inefficiency, and declining trust in digital platforms. The report warns that these pressures can weaken independent journalism and reduce advertising effectiveness as confidence in online environments continues to erode.

UN officials and industry representatives are calling for stronger governance, clearer oversight of AI supply chains, and closer cooperation between regulators, advertisers, and civil society. The core message is that without stronger guardrails, AI may accelerate the breakdown of information ecosystem integrity rather than simply improve commercial performance.

Why does it matter?

AI is becoming embedded in systems that shape the online flow of information, which means advertising is no longer only a commercial mechanism but also a force affecting public perception and trust. As automation expands without clear oversight, risks can spread beyond brand safety into disinformation, media sustainability, and democratic discourse.
