Australia’s ASIC urges cyber resilience as frontier AI raises risk

The Australian Securities and Investments Commission has urged regulated entities to strengthen cyber resilience, warning that frontier AI could intensify cyber risks by exposing vulnerabilities at greater speed, scale and sophistication.

In an open letter to industry, ASIC said licensees and market participants should act now to improve their cybersecurity fundamentals rather than wait as advanced AI tools reshape the threat environment. The regulator said cyber resilience should be treated as a core licensing obligation, not solely as an IT issue.

ASIC Commissioner Simone Constant said frontier AI creates opportunities but also materially increases cyber risk, including by exposing weaknesses faster than many organisations realise. She warned that vulnerabilities once seen as isolated could have system-wide effects, and that forms of exploitation previously out of reach could become available to many malicious actors.

The letter follows ASIC’s recent court outcome against FIIG Securities Limited, which the regulator said reinforced the need for cyber risk management controls to be demonstrably effective and proportionate to a business’s size, nature and complexity.

ASIC is urging entities to reassess cyber plans, identify and protect critical systems, reduce exposure to untrusted networks, review user access, patch systems promptly, strengthen incident response planning and manage third-party risks. It also says organisations should use AI defensively where appropriate, including to identify vulnerabilities and secure software before release.
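To make one of those fundamentals concrete, the sketch below shows one way patch currency might be monitored automatically. It is a minimal illustration under invented assumptions, not anything prescribed in ASIC's letter: the host names, inventory format and 30-day threshold are all hypothetical.

```python
from datetime import date, timedelta

# Hypothetical inventory mapping host name -> date of last applied patch.
# In practice this would be fed from an asset-management or MDM system.
LAST_PATCHED = {
    "web-frontend-01": date(2025, 11, 3),
    "trading-gateway": date(2025, 6, 18),
    "hr-database": date(2025, 10, 29),
}

MAX_PATCH_AGE = timedelta(days=30)  # illustrative policy threshold


def overdue_hosts(inventory, today=None):
    """Return hosts whose last patch is older than the policy allows."""
    today = today or date.today()
    return sorted(
        host for host, patched in inventory.items()
        if today - patched > MAX_PATCH_AGE
    )


if __name__ == "__main__":
    for host in overdue_hosts(LAST_PATCHED):
        print(f"ALERT: {host} not patched within {MAX_PATCH_AGE.days} days")
```

In a real environment the alert would feed into the incident response and escalation processes the letter asks entities to strengthen.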

Constant said entities need robust incident response plans and that the underlying principles of cyber risk management remain the same: govern, protect, detect and respond. She also said boards and executives must ensure systems are tested, weaknesses are addressed early, and action is taken before threats can be exploited.

ASIC says entities must table the letter at their ultimate board and risk governance committees. It also encourages regulated entities to use guidance from trusted sources, including the Australian Signals Directorate and the Australian Government’s Cyber Health Check.

Why does it matter?

ASIC’s warning shows that financial regulators are beginning to treat frontier AI as a force multiplier of cyber risk, not just a technology issue. By framing cyber resilience as a licensing and board-level governance obligation, the regulator is signalling that firms may be judged not only on whether they suffer cyber incidents, but on whether their controls, escalation processes and resilience planning are proportionate to an AI-accelerated threat environment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI introduces a trusted contact safety feature in ChatGPT

OpenAI has started rolling out Trusted Contact, an optional safety feature in ChatGPT designed to help connect adult users with real-world support during moments of serious emotional distress.

The feature allows users to nominate one trusted adult, such as a friend, family member or caregiver, who may receive a notification if OpenAI’s automated systems and trained reviewers detect that the user may have discussed self-harm in a way that indicates a serious safety concern.

OpenAI said the feature is intended to add another layer of support alongside existing safeguards in ChatGPT, including prompts that encourage users to contact crisis hotlines, emergency services, mental health professionals, or trusted people when appropriate. The company stressed that Trusted Contact does not replace professional care or crisis services.

Users can add a trusted contact through ChatGPT settings. The contact receives an invitation explaining the role and must accept it within one week before the feature becomes active. Users can later edit or remove their trusted contact, while the trusted contact can also remove themselves.
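The invitation lifecycle described above maps naturally onto a small state machine. The sketch below is a toy model of that flow; OpenAI has not published implementation details, so every name, field and state here is an assumption.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

INVITE_WINDOW = timedelta(days=7)  # invitation lapses after one week


@dataclass
class TrustedContact:
    """Toy model of the flow: 'invited' -> 'active' if accepted in time,
    'expired' otherwise; either party may move an active contact to
    'removed'. All names are hypothetical, not OpenAI's schema."""
    name: str
    invited_at: datetime
    accepted_at: Optional[datetime] = None
    removed: bool = False

    def accept(self, when: datetime) -> bool:
        # Acceptance only counts within the one-week window.
        if not self.removed and when - self.invited_at <= INVITE_WINDOW:
            self.accepted_at = when
            return True
        return False

    def state(self, now: datetime) -> str:
        if self.removed:
            return "removed"
        if self.accepted_at is not None:
            return "active"
        if now - self.invited_at > INVITE_WINDOW:
            return "expired"
        return "invited"
```

A production system would also handle re-invitation, notification delivery and audit logging, none of which OpenAI has described publicly.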

If ChatGPT detects a possible serious self-harm concern, the user is informed that their trusted contact may be notified and is encouraged to reach out directly. A small team of specially trained reviewers then assesses the situation before any notification is sent.

OpenAI said notifications are intentionally limited and do not include chat details or transcripts. Instead, they share only the general reason for the alert, that self-harm came up in a potentially concerning way, and encourage the trusted contact to check in. The company said every notification undergoes human review, which it aims to complete in under one hour.

The feature was developed with guidance from clinicians, researchers and organisations specialising in mental health and suicide prevention, including the American Psychological Association. OpenAI said Trusted Contact forms part of broader efforts to improve how AI systems respond to people experiencing distress and connect them with real-world care, relationships and resources.

Why does it matter?

Trusted Contact points to a broader shift in AI safety away from content moderation alone toward real-world support mechanisms for users in moments of vulnerability. As conversational AI systems become part of everyday personal reflection and emotional support, companies face growing pressure to define when and how they should intervene, how much privacy to preserve, and what role human review should play in high-risk situations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US and China reportedly weigh AI risk talks ahead of leaders’ summit

The United States and China are considering launching official discussions on AI risk management, The Wall Street Journal reported, citing people familiar with the matter.

According to the report, the White House and the Chinese government are also considering whether to place AI on the agenda for a planned summit in Beijing between US President Donald Trump and Chinese President Xi Jinping. If agreed, the talks would mark the first AI-specific engagement between the two governments under the current US administration.

The possible dialogue could focus on risks linked to advanced AI systems, including unexpected model behaviour, autonomous military applications and the misuse of powerful open-source tools by non-state actors, people familiar with the discussions told the newspaper. The report said Washington is waiting for Beijing to designate a counterpart for the talks.

The WSJ reported that US Treasury Secretary Scott Bessent is leading the US side, while Chinese Vice Finance Minister Liao Min has been involved in discussions on setting up such a channel. The newspaper added that the two presidents would ultimately decide whether AI appears on the formal summit agenda.

Liu Pengyu, spokesperson for the Chinese Embassy in Washington, was cited as saying that China is ready to engage in communication on AI risk mitigation. Analysts have raised the possibility that any future dialogue could support crisis-management tools, including an AI hotline between senior leaders.

The report places the latest deliberations in the context of earlier US-China engagement on AI. In 2023, then US President Joe Biden and Xi launched a formal AI dialogue, and both sides later said humans, not AI, would retain authority over nuclear-launch decisions. The WSJ said the earlier process produced limited results, but AI has remained a high-level focus in bilateral relations.

Non-governmental discussions have also reportedly continued in parallel, including exchanges involving former Microsoft research executive Craig Mundie and Chinese counterparts from Tsinghua University and major AI companies. Participants cited by the newspaper said those exchanges have focused on frontier-model safety, technical guardrails and broader questions of strategic stability.

Why does it matter?

A formal AI risk channel between Washington and Beijing would signal that both governments see advanced AI as a strategic stability issue, not only an economic or technological race. Even brief talks could matter if they create channels for crisis communication about military AI, frontier-model failures, or misuse by non-state actors. However, because the discussions are still only reported as under consideration, the significance lies in the possibility of a risk-management mechanism, not in any confirmed diplomatic breakthrough.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

WEF report says AI is reshaping cybersecurity defence

Advanced AI models are reshaping cybersecurity by accelerating both offensive and defensive capabilities, forcing organisations to rethink how they detect, assess and respond to cyber threats.

A new World Economic Forum report argues that AI is becoming a defining force in cybersecurity, with organisations increasingly moving from pilot projects to operational deployment. According to the WEF, AI is already being used to improve vulnerability identification, threat detection, response speed and resilience.

The report highlights how AI can help security teams process large volumes of data, detect threats faster and support more efficient responses. At the same time, it warns that threat actors are also using AI to automate deception, generate malware and scale attacks at machine speed.

WEF’s analysis says the growing speed and scale of AI-enabled cyber operations are putting pressure on traditional cybersecurity models. Instead of relying mainly on prevention and scheduled patching cycles, organisations are being pushed towards continuous detection, automated response, stronger access controls and more resilient infrastructure.

The report also stresses that AI’s value in cybersecurity depends on strategy, governance and human oversight. Rather than treating AI as a standalone tool, organisations are encouraged to test use cases carefully, build appropriate safeguards and invest in the skills and processes needed to defend at machine speed.

Why does it matter?

AI is changing cybersecurity on both sides of the equation. It can lower the barriers for faster and more scalable attacks, but it can also help defenders improve detection, response and resilience. The wider significance is that cybersecurity strategies built around periodic assessment and manual response may become less effective as AI-driven threats and defences operate at greater speed and scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

European Commission updates guidance on generative AI use in research

The European Commission has updated the ERA Living Guidelines on the responsible use of generative AI in research, reflecting the growing use of AI tools across scientific work. The revised guidance aims to support researchers, research organisations and funding bodies in adopting generative AI while maintaining core principles of research integrity.

The guidelines emphasise reliability, honesty, respect and accountability, including transparency over AI use, protection of privacy and confidential information, and responsibility for research outputs. They also stress that researchers remain ultimately responsible for scientific output and should verify AI-generated results.

New recommendations address risks linked to the use of generative AI by third parties, including in meetings, note-taking, summaries and document overviews, where confidential information, data protection or intellectual property rights may be affected. The guidelines encourage researchers and organisations to inform third parties about the use of such tools and related risks.

A specific addition concerns the risk of ‘hidden prompts’, where instructions may be secretly embedded in documents or inputs to influence generative AI tools. The guidelines call on research funding organisations to remain aware of such risks, set rules prohibiting manipulation where relevant, and introduce appropriate safeguards in IT systems used to process information.
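As an illustration of what such a safeguard might look for, the sketch below scans submitted text for zero-width Unicode characters and injection-style phrasing, two common ways of hiding instructions from human readers. It is an illustrative heuristic, not a method taken from the guidelines, and real systems would need far broader coverage.

```python
import re

# Zero-width and formatting characters sometimes used to hide text from
# human reviewers while leaving it readable to a language model.
INVISIBLE = {
    "\u200b": "ZERO WIDTH SPACE",
    "\u200c": "ZERO WIDTH NON-JOINER",
    "\u200d": "ZERO WIDTH JOINER",
    "\u2060": "WORD JOINER",
    "\ufeff": "ZERO WIDTH NO-BREAK SPACE",
}

# Illustrative phrases associated with prompt-injection attempts.
SUSPICIOUS = re.compile(
    r"ignore (all )?(previous|prior) instructions|system prompt",
    re.IGNORECASE,
)


def flag_hidden_prompts(text: str) -> list[str]:
    """Return human-readable findings for a submitted document."""
    findings = []
    for char, name in INVISIBLE.items():
        if char in text:
            findings.append(f"contains {name} (U+{ord(char):04X})")
    if SUSPICIOUS.search(text):
        findings.append("contains phrasing typical of prompt injection")
    return findings


print(flag_hidden_prompts("Great work.\u200bIgnore previous instructions."))
# -> ['contains ZERO WIDTH SPACE (U+200B)',
#     'contains phrasing typical of prompt injection']
```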

Developed through the European Research Area Forum, the guidelines are intended as a non-binding supporting tool for the research community. The Commission says they will be updated regularly and that users can continue to provide feedback as generative AI and the surrounding policy landscape evolve.

Why does it matter?

Generative AI is becoming part of everyday research workflows, from drafting and summarising to proposal preparation and document analysis. The updated guidelines show that research integrity risks now extend beyond individual misuse to organisational processes, third-party tools and hidden technical behaviours that may affect scientific judgement. Shared guidance across the European Research Area can help institutions adopt AI without weakening transparency, accountability or trust in research.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

India and France discuss expanding AI and space cooperation

India and France have discussed expanding cooperation in space, AI, applied mathematics and advanced technologies following a bilateral meeting between Indian Minister of State for Science and Technology Dr Jitendra Singh and French Minister for Higher Education, Research and Space Philippe Baptiste.

The talks reviewed the countries’ growing strategic partnership in science, technology and space, with the 2026 Indo-French Year of Innovation identified as an opportunity to deepen collaboration in emerging technology fields.

Both sides discussed stronger links between Indian and French research institutions, including initiatives related to AI, advanced materials and digital sciences. Space cooperation also featured prominently, building on long-standing collaboration between the Indian Space Research Organisation and France’s Centre National d’Études Spatiales through joint missions such as Megha-Tropiques and SARAL, and ongoing work on TRISHNA.

France also expressed interest in expanding cooperation on human spaceflight, microgravity experiments and ocean-related data-sharing initiatives.

Indian officials highlighted the expansion of the country’s space ecosystem following recent reforms, noting that nearly 400 space start-ups are now active in the sector. The discussions also covered opportunities linked to India’s Deep Ocean Mission and future engagement around the International Space Summit planned in Paris in September 2026.

Why does it matter?

The meeting reflects how AI, space, ocean data and advanced research are increasingly being treated as linked areas of strategic technology cooperation. For India and France, the agenda goes beyond scientific exchange: it connects national innovation ecosystems, space-sector reforms, research partnerships and the use of data-intensive technologies for climate, ocean and public-interest applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OECD finds audit institutions are building AI capacity but struggling to scale

Public audit institutions are expanding their use of AI, but most remain at an early stage of adoption, with a significant gap between pilot projects and full operational deployment, according to a new OECD paper.

Drawing on consultations with 15 institutions across 14 countries and the European Union, the paper says AI is being explored to strengthen oversight and improve audit processes in areas such as anomaly detection, document processing, knowledge management and predictive risk assessment.

The OECD says institutional commitment is already visible across several indicators. Among the institutions consulted, 67% reported having a formal AI strategy, 80% had internal AI guidelines or policies, 87% offered AI-related staff training, and 87% had at least one AI tool in production.

However, the paper stresses that maturity levels vary widely and that many tools remain limited in scale or are still being tested. It identifies a gap between experimentation and scalable operational deployment, despite the growing integration of AI into broader digital transformation efforts.

The paper highlights several emerging audit use cases, including machine-learning systems for anomaly detection in procurement and financial records, predictive models to identify entities at higher risk of distress or non-compliance, intelligent document processing for extracting data from unstructured files, and generative AI tools for drafting, summarising and translating documents.
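To give a flavour of the anomaly-detection use case, the sketch below applies scikit-learn’s IsolationForest, one common unsupervised technique, to synthetic procurement records. The data, features and contamination rate are invented for illustration; the OECD paper names the use case but does not specify any particular algorithm.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic procurement records: (contract value, number of bidders).
# Real audit pipelines would draw on far richer, cleaned ledger data.
rng = np.random.default_rng(seed=0)
normal = np.column_stack([
    rng.normal(50_000, 10_000, 500),  # typical contract values
    rng.integers(3, 8, 500),          # typical bidder counts
])
outliers = np.array([[400_000, 1], [350_000, 1]])  # large single-bid awards
records = np.vstack([normal, outliers])

# Unsupervised anomaly detection: -1 marks records the model isolates
# easily, which an auditor would then review manually.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(records)

flagged = records[labels == -1]
print(f"{len(flagged)} records flagged for auditor review")
```

The flagged records are candidates for human follow-up, consistent with the paper’s framing of AI as a support tool rather than a substitute for auditors.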

It also points to more specialised applications, such as semantic search, knowledge management, and visual or spatial analysis using satellite imagery, drones or other sensor-based systems.

Despite growing experimentation, the OECD says the main barriers to wider use remain structural. Fragmented data systems, weak interoperability, limited internal technical expertise and uneven digital infrastructure continue to slow progress.

The paper argues that robust data governance, secure and interoperable systems, and stronger in-house development capacity will be critical if public audit bodies are to scale AI responsibly while maintaining transparency, accountability and public trust.

It also stresses that AI is being positioned as a support tool rather than a substitute for auditors. Across the cases reviewed, human oversight remains central, both because of current limitations in explainability and reliability and because audit institutions are treating AI adoption cautiously in high-stakes oversight settings.

The OECD presents the current period as a transitional phase in which public audit institutions are building the foundations needed for broader and more trustworthy use of AI in oversight.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ATxSummit 2026 to focus on AI governance and digital growth in Asia

ATxSummit 2026 will take place in Singapore on 20 and 21 May 2026 as part of Asia Tech x Singapore. Organisers state that the event will convene more than 4,000 participants from over 50 countries, including policymakers, technology companies, researchers, and industry representatives.

The programme will focus on five themes related to AI deployment and governance. These include agentic systems in enterprise operations, AI applications for public-sector and national use, scientific research and embodied intelligence, workforce and organisational changes, and the implementation of AI governance approaches.

Participants include representatives from organisations such as the World Bank Group, NVIDIA, Google, Amazon, and OpenAI. The programme also includes academic and policy discussions involving AI research, security, and digital governance.

The summit will include technical workshops, government roundtables, and the Digital Frontier Forum, focused on AI, deep technology, and digital growth strategies. ATxEnterprise will also take place alongside the summit, with sessions addressing infrastructure investment, digital trust, cross-border connectivity, and responsible AI deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ICESCO and Morocco sign agreement on AI and digital capacity building

The Islamic World Educational, Scientific and Cultural Organisation (ICESCO) and Morocco’s Ministry of Digital Transition and Administrative Reform have signed a memorandum of understanding on cooperation in digital transformation, AI and strategic foresight.

The agreement was signed in Rabat on the sidelines of the African Open Government Conference by ICESCO Director-General Dr Salim M. AlMalik and Dr Amal El Fallah, Minister Delegate to the Head of Government in charge of Digital Transition and Administrative Reform of Morocco.

The memorandum provides for workshops, training programmes and joint seminars aimed at building capacity among public and private sector professionals in digital transformation, AI, strategic foresight and digital diplomacy. It also covers the exchange of expertise and open data, the preparation of reference materials, and research related to future skills and professions in ICESCO member states.

The agreement further includes cooperation with universities and research centres to support a knowledge ecosystem aligned with the requirements of the digital economy. It also refers to innovation laboratories and digital tools for the digitisation, indexing, research and analysis of cultural and scientific heritage materials.

Why does it matter?

The agreement places AI within a broader capacity-building agenda that includes public-sector skills, digital diplomacy, open data, foresight and heritage digitisation. Its policy relevance lies in how international organisations and national governments are using AI cooperation not only for technology adoption, but also for institutional readiness and future skills development across member states.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Automation fuels inequality more than productivity gains, study finds

A new study co-authored by economists from the Massachusetts Institute of Technology and Yale University finds that automation in the United States has often been driven less by productivity gains and more by firms’ efforts to reduce labour costs.

Rather than replacing workers to maximise efficiency, companies have frequently targeted employees earning a ‘wage premium’, that is, pay above the average for comparable roles, as a way to cut labour costs.
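One simple way to formalise the ‘wage premium’ idea, using notation invented here rather than taken from the study itself, is as the gap between a worker’s wage and the average wage in a comparable role:

```latex
% Illustrative notation (not the study's own): worker $i$ in
% occupation-industry cell $j$ earns wage $w_{ij}$, and the cell's
% average wage is $\bar{w}_j$. The worker's wage premium is the gap
\[
  \pi_{ij} = w_{ij} - \bar{w}_j .
\]
```

On this reading, the study’s claim is that automation has disproportionately displaced workers with a positive premium, cutting labour costs even where the machine’s own productivity contribution is modest.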

The research suggests this pattern has contributed significantly to widening income inequality while delivering only limited productivity improvements.

The analysis, which examines data spanning multiple decades and industries, indicates that automation has disproportionately displaced higher-earning workers within the occupational groups it touches. It also estimates that inefficient automation deployment may have offset a large share of potential productivity gains over time.

Researchers argue that the findings highlight a structural tension in how automation is applied, where short-term cost reduction can take priority over long-term economic efficiency, shaping both wage distribution and overall growth dynamics in the US economy since 1980.

Why does it matter? 

The findings challenge the assumption that automation primarily improves efficiency and productivity, showing instead that firms can strategically use it to reshape wage structures and concentrate economic gains.

From a broader perspective, this helps explain why technological progress has not translated evenly into higher productivity or shared prosperity, while also highlighting how corporate incentives can steer innovation in ways that deepen inequality across labour markets.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!