Generative AI policy updated by Australian Research Council

The Australian Research Council has updated its policy on the use of generative AI in its grants programmes, setting out how the rules apply to applicants, administering organisations, and assessors in the National Competitive Grants Program.

The revised policy has officially taken effect and applies to applications and assessments for Discovery Indigenous 2027 and all scheme rounds opening thereafter.

The policy says applicants may use generative AI tools to support tasks such as testing ideas, improving language, and summarising text, but remain responsible for the content they submit and are considered the authors of that content.

Administering organisations are also responsible for ensuring that applications are complete, accurate, and free from false or misleading information, while delegated research leaders must certify that participants are responsible for the authorship and intellectual content of applications and that they have not infringed the intellectual property rights of others.

A notable change in the revised policy is that assessors are now permitted to use generative AI tools in limited ways. The ARC says assessors may use AI only to correct or improve grammar, spelling, formatting, and the readability of drafted assessments.

At the same time, the policy states that assessors must not use AI to help form an opinion on the quality of an application and must preserve the confidentiality of all application materials. Inputting any application material into public generative AI tools such as ChatGPT, Gemini, Claude, or Perplexity is described by the ARC as a serious breach of confidentiality and is not permitted.

The ARC also says assessors will be asked about their use of AI and must be transparent when requested. Where assessors’ inappropriate use of generative AI is suspected, the ARC may remove that assessment from the process. If a breach is established following investigation, the ARC may impose consequential actions in addition to any imposed by the assessor’s employing institution.

The revised policy explains that its approach is shaped by concerns including intellectual integrity and authorship, safeguarding intellectual property, culturally appropriate use of data, content reliability and bias, human oversight and expert judgement, and energy and environmental impacts. It also states that the ARC will continue to monitor developments in generative AI and update the policy as required.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN experts warn of growing risks from digital surveillance and AI misuse

UN human rights experts have raised concerns about the global expansion of digital surveillance technologies and their impact on fundamental freedoms, warning that current practices risk undermining democratic participation and civic space.

In a joint statement, the experts said that surveillance tools are increasingly used in ways that may be incompatible with international human rights standards. They noted that such technologies are often deployed against civil society, journalists, political opposition, and minority groups, contributing to what they described as a ‘chilling effect’ on freedom of expression and dissent.

The experts highlighted the growing use of advanced technologies, including AI, in areas such as law enforcement, counter-terrorism, and border management. They said that, without adequate legal safeguards, these tools can enable large-scale monitoring, predictive profiling, and the amplification of bias, potentially leading to disproportionate targeting of individuals and groups.

According to the statement, digital surveillance systems are part of broader ecosystems that involve collaboration among governments, private companies, and data intermediaries. These interconnected systems can expand state surveillance capabilities and increase the complexity of assessing their impact on human rights.

The experts also pointed to the role of legal frameworks, noting that broadly defined laws on national security, extremism, and cybercrime may contribute to the misuse of surveillance technologies. Such measures, they said, can affect the work of civil society organisations and other actors operating in the public sphere.

To address these challenges, the experts called for stronger safeguards, including clearer limits on surveillance practices, risk-based regulation of AI systems, and improved oversight mechanisms. They emphasised the importance of human rights impact assessments throughout the lifecycle of digital technologies, as well as the need for accountability and access to remedies in cases of harm.

Why does it matter?

The statement also highlighted the importance of data protection, system testing, and validation to reduce risks associated with digital surveillance tools. It called on governments to align national legislation with international human rights standards and ensure independent oversight of surveillance activities.

The experts further suggested that international cooperation may be needed to address cross-border implications, including the potential development of a binding international framework governing digital surveillance technologies.

UNESCO and Oxford University launch global AI course for courts

A free online course aimed at preparing judicial systems for the growing role of AI in legal decision-making has been launched by UNESCO in partnership with the University of Oxford.

AI is already shaping court processes, influencing evidence assessment, and affecting access to justice. Yet, many legal professionals lack structured guidance to evaluate such systems within a rule-of-law framework.

The UNESCO programme introduces a practical, human rights-based approach to AI, combining legal, ethical, and operational perspectives.

Developed with institutions including Oxford’s Saïd Business School and Blavatnik School of Government, the course equips participants with tools to assess algorithmic outputs, manage risks of bias, and maintain judicial independence in increasingly digital court environments.

Central to UNESCO’s initiative is a newly developed AI and Rule of Law Checklist, designed to help courts scrutinise AI systems and their outputs, including use as evidence.

The course also addresses broader concerns, including fairness, transparency, accountability, and the protection of vulnerable groups, reflecting rising global reliance on AI across justice systems.

Supported by the EU, the course is available globally, free of charge, with certification from the University of Oxford. As AI becomes embedded in judicial processes, capacity-building efforts aim to ensure technological adoption strengthens rather than undermines the rule of law.

EU pushes Android changes to open AI competition

The European Commission has outlined draft measures requiring Google to improve interoperability on Android as part of ongoing proceedings under the Digital Markets Act. Regulators are focusing on how third-party AI services can interact with hardware and software features controlled by the Android operating system.

The proposed measures are intended to give competing AI services access to key Android features already used by Google’s own AI services, including Gemini. In practice, that could allow rival services to support actions such as sending messages, sharing content, or completing tasks through user-preferred applications rather than being limited by Google’s default ecosystem.

The Commission’s approach could also make it easier for users to activate alternative AI assistants through customised interactions and device-level features, reducing dependence on default system tools. The broader aim is to give third-party providers a more equal opportunity to innovate and compete in the fast-moving market for AI services on mobile devices.

Feedback on the proposed measures is being gathered as part of the Commission’s specification proceedings under the DMA. The consultation forms part of a wider regulatory effort to enforce fair access to core platform features and strengthen digital competition across European markets, including in the AI sector.

Why does it matter?

The move targets one of the most important control points in the digital economy: the operating system layer. Opening Android features to competing AI services could reduce the structural advantage held by Google and shift power towards a more competitive, multi-provider mobile ecosystem. This is an inference based on the Commission’s stated objective of giving third-party AI services access equivalent to that available to Google’s own AI tools.

Greater interoperability under the Digital Markets Act could reshape how AI reaches users, turning smartphones into more open platforms rather than tightly controlled default environments. At the same time, the case also shows how strongly the EU is trying to apply competition law to the next phase of AI distribution, not only to search, app stores, and browsers.

UK backs self-learning AI push to advance scientific discovery

The UK’s Sovereign AI Fund has invested in Ineffable Intelligence, a British startup developing self-learning AI systems designed to generate new knowledge rather than rely solely on existing data. The investment is being made alongside the British Business Bank.

The company is building algorithms intended to improve through interaction with their environment, refining outcomes through iterative experimentation. The approach is aimed at enabling AI systems to identify new patterns and solutions for use in science, engineering, and healthcare.

Led by AI researcher David Silver, known for his work in reinforcement learning, the project reflects a broader shift towards more autonomous and exploratory forms of AI. Support from the Sovereign AI Fund is intended to help the company scale its development from within the UK and strengthen longer-term domestic innovation capacity.

The investment forms part of a wider strategy to strengthen sovereign AI capability in the UK, reduce reliance on external technologies, and reinforce domestic expertise. In that context, infrastructure support and talent development are being positioned as part of a broader effort to support the growth of next-generation AI systems and expand the UK’s role in frontier research.

Why does it matter?

Investment in self-learning AI reflects a broader shift in how advanced AI is being developed, from systems that mainly analyse existing information towards systems intended to generate new insights through exploration and interaction. If those approaches prove effective, they could accelerate discovery in fields where conventional modelling and data-driven methods have clear limits. This is an inference based on the company’s stated aims and the government’s framing of the investment.

More broadly, sovereign investment in advanced AI highlights a growing focus on technological independence and strategic control over critical digital capability. Strengthening domestic capacity could help ensure that future AI innovation is developed within national ecosystems, with implications for economic competitiveness and long-term research direction.

Greece accelerates digital governance with AI enforcement and social media age restrictions

Greece is moving to tighten online child protection and expand AI-based public enforcement as part of a broader digital governance agenda, Digital Governance and Artificial Intelligence Minister Dimitris Papastergiou has said.

Under the plan, social media platforms would, from 2027, be required to block access for users under 15 using age verification systems rather than self-declared age data.

The policy includes tools such as Kids Wallet, built on privacy-preserving verification methods that share only age eligibility. Authorities say the aim is to address risks linked to digital addiction while strengthening protections for minors across online environments.
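The principle behind a privacy-preserving verifier that "shares only age eligibility" can be illustrated with a minimal sketch. This is an assumption-laden illustration of the general pattern, not the actual Kids Wallet design: the eligibility check runs where the birth date lives, and the platform receives only a yes/no attestation.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch only: the wallet holds the birth date locally and
# releases a boolean eligibility attestation, never the date itself.
# All names here are hypothetical, not Kids Wallet's real interface.

@dataclass(frozen=True)
class EligibilityAttestation:
    over_15: bool  # the only field a platform would ever see

def attest_age(birth_date: date, today: date,
               minimum_age: int = 15) -> EligibilityAttestation:
    """Compute the user's age locally and expose only the yes/no result."""
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return EligibilityAttestation(over_15=age >= minimum_age)

# The platform checks token.over_15; the birth date never leaves the wallet.
token = attest_age(date(2013, 6, 1), date(2026, 2, 1))
print(token.over_15)  # a 12-year-old user: False
```

The design choice is that the attestation object carries no raw personal data, so even a compromised platform learns only one bit about the user.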

Alongside these measures, AI is already being deployed in road safety enforcement. Smart cameras are being used to issue digital fines through government platforms, with a nationwide rollout planned to expand monitoring and improve compliance.

These measures form part of a wider effort to digitise public administration, reduce inefficiencies, and strengthen accountability. By embedding technology more deeply into everyday governance, Greece is trying to reshape how citizens interact with the state while also addressing long-standing systemic problems.

EU advances GPAI framework with focus on forecasting systemic risks

At the third meeting of the Signatory Taskforce, the European Commission advanced discussions on how to strengthen oversight of advanced AI systems through the General-Purpose AI Code of Practice, with a particular focus on risk forecasting and harmful manipulation.

The latest GPAI taskforce meeting focused on improving how providers assess and anticipate systemic risks linked to high-impact AI models. A central proposal would require providers to estimate when future systems may exceed the highest systemic risk tier already reached by any of their existing models, using structured forecasting methods.

The Commission is also considering using aggregate forecasts across the industry to provide a broader view of technological trends, including compute capacity, algorithmic efficiency, and data availability. The aim is to improve visibility into how capabilities may evolve across the sector rather than only at the level of individual providers.

Attention was also directed towards harmful manipulation, which the Code treats as a recognised systemic risk. Discussions focused on how providers should develop realistic scenarios for testing and evaluating model behaviour, including in deployment settings such as chatbot interfaces, third-party applications, and agentic systems.

The initiative reflects a wider EU regulatory approach centred on transparency, accountability, and proactive governance in AI development. Rather than waiting for harms to materialise, the Code of Practice is being used to push providers to identify risks earlier and to adopt more structured safety planning for general-purpose AI models with systemic risk.

UN prepares first Global Dialogue on AI governance ahead of Geneva meeting

The United Nations is advancing preparations for the first Global Dialogue on Artificial Intelligence Governance, set to take place in Geneva on 6–7 July 2026 alongside the AI for Good Summit.

Speaking at a UN Geneva press briefing, Egriselda López, Permanent Representative of El Salvador and co-chair of the Dialogue, said the initiative was established by UN member states as a universal forum to discuss AI governance. The process is intended to bring together governments and stakeholders with the aim of producing tangible outcomes.

López said the initial meeting will be structured around thematic clusters, including one focusing on AI opportunities and implications and another addressing the digital divide. She added that consultations with member states and stakeholders are ongoing to ensure an inclusive format for the discussions.

Rein Tammsaar, Permanent Representative of Estonia and co-chair of the Dialogue, said the forum aims to connect existing AI initiatives and best practices from around the world. He stressed the importance of interoperability and coordination, noting that the Dialogue seeks to create synergies rather than duplicate existing efforts.

According to Tammsaar, additional thematic areas will include interoperability, safety, and human rights. While human rights are expected to be a cross-cutting issue, stakeholders have also called for it to be addressed as a standalone theme.

Amandeep Gill, UN Secretary-General’s Envoy on Technology, described the initiative as part of a broader approach to ensuring that AI benefits humanity as a whole. He said the Dialogue is designed as a ‘dialogue of dialogues’, enabling governments, experts and other stakeholders to exchange knowledge in a rapidly evolving technological environment.

Gill also highlighted the role of the Independent International Scientific Panel on AI, which is expected to present its findings at the Geneva meeting. He noted that global capacity to both use and govern AI remains uneven, underlining the need to address disparities between countries.

Officials emphasised that the Dialogue is intended to complement existing initiatives rather than centralise governance efforts. It will focus on issues such as safety and human rights, while discussions on military uses of AI fall outside its mandate.

A second Global Dialogue on AI Governance meeting is planned for May 2027 in New York, as part of ongoing efforts to develop a more coordinated and inclusive global approach to AI governance.

Saudi initiative attempts to link AI with sustainability goals

A new AI-enabled sustainability platform developed with support from the World Economic Forum aims to strengthen partnerships across sectors. The initiative is led by Saudi Arabia’s Ministry of Economy and Planning as part of its wider development agenda.

The platform, known as SUSTAIN, uses AI to match organisations with potential partners and opportunities. It is designed to connect government, businesses, academia, and civil society more efficiently and to help move sustainability projects from planning to implementation.

Developers say the system could accelerate collaboration and support the delivery of higher-impact sustainability projects. Official estimates suggest it could help unlock partnerships worth up to $20 billion in Saudi Arabia and significantly more across the wider region.

The initiative forms part of broader efforts to advance long-term sustainability goals through more coordinated action and practical uses of AI. The project is being developed in Saudi Arabia and presented as a tool to strengthen cross-sector cooperation rather than a stand-alone sustainability programme.

Kazakhstan advances digital economy with AI business assistant

Kazakhstan has introduced an AI-powered assistant designed to simplify the process of starting a business, according to Minister of Digital Development, Innovation and Aerospace Industry Zhaslan Madiyev. Developed in cooperation with the Ministry of Finance, the platform aims to provide data-driven guidance to early-stage entrepreneurs.

Built around a digital mapping system, the assistant evaluates factors such as nearby businesses, customer flow, and competition. Its recommendations aim to help users choose more viable locations and avoid oversaturated sectors, thereby reducing the risk of duplicating businesses in the same area.
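The kind of trade-off the assistant weighs, customer flow against nearby competition, can be sketched in a few lines. The field names, weights, and scoring formula below are assumptions for illustration, not the ministry's actual model.

```python
from dataclasses import dataclass

# Hypothetical illustration of location viability scoring: higher foot
# traffic raises a site's score, while each same-sector competitor
# nearby dilutes it. Names and formula are assumed, not official.

@dataclass
class Location:
    name: str
    daily_foot_traffic: int   # estimated passers-by per day
    competitors_nearby: int   # same-sector businesses within walking distance

def viability_score(loc: Location) -> float:
    """Simple score: traffic shared among the incumbents plus the newcomer."""
    return loc.daily_foot_traffic / (1 + loc.competitors_nearby)

candidates = [
    Location("Market square", daily_foot_traffic=4000, competitors_nearby=7),
    Location("New district", daily_foot_traffic=1800, competitors_nearby=1),
]
best = max(candidates, key=viability_score)
print(best.name)  # the quieter but less saturated site wins: 900 vs 500
```

Even this toy version captures the article's point: a busy but oversaturated area can score lower than a quieter district with little competition.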

Officials say the tool could reduce startup operating costs by up to half while improving long-term business sustainability. Alongside it, a second AI assistant already provides continuous guidance on tax reporting and regulatory compliance, translating complex requirements into clearer, more practical steps for users. According to Kazakhstani reporting, the tax assistant has already processed more than 5,000 requests.

The development forms part of Kazakhstan’s wider digital transformation agenda, which aims to modernise public services and strengthen the country’s digital economy through practical AI deployment. The government says more than 50 AI-powered services are now being developed to support citizens and businesses.

Why does it matter?

Kazakhstan’s AI assistant points to a shift from basic digital services towards more active, real-time decision support for entrepreneurs. Data-driven recommendations can help reduce startup risks, limit market oversaturation, and support more efficient resource allocation across local economies.

Simplified tax and compliance guidance also targets one of the main barriers facing early-stage businesses: administrative complexity. Placed within Kazakhstan’s broader AI-first digital strategy, the initiative signals a wider move towards a more competitive and operationally AI-driven digital economy.
