Generative AI policy updated by Australian Research Council
A revised ARC policy clarifies how generative AI may be used in grant applications and assessments.
The Australian Research Council has updated its policy on the use of generative AI in its grants programmes, setting out how the rules apply to applicants, administering organisations, and assessors in the National Competitive Grants Program.
The revised policy has officially taken effect and applies to applications and assessments for Discovery Indigenous 2027 and to all scheme rounds opening after its commencement.
The policy says applicants may use generative AI tools to support tasks such as testing ideas, improving language, and summarising text, but they remain responsible for the content they submit and are considered its authors.
Administering organisations are also responsible for ensuring that applications are complete, accurate, and free from false or misleading information. Delegated research leaders, in turn, must certify that participants are responsible for the authorship and intellectual content of their applications and have not infringed the intellectual property rights of others.
A notable change in the revised policy is that assessors are now permitted to use generative AI tools in limited ways. The ARC says assessors may use AI only to correct or improve grammar, spelling, formatting, and the readability of drafted assessments.
At the same time, the policy states that assessors must not use AI to help form an opinion on the quality of an application and must preserve the confidentiality of all application materials. Inputting any application material into public generative AI tools such as ChatGPT, Gemini, Claude, or Perplexity is described by the ARC as a serious breach of confidentiality and is not permitted.
The ARC also says assessors will be asked about their use of AI and must be transparent when requested. Where assessors’ inappropriate use of generative AI is suspected, the ARC may remove that assessment from the process. If a breach is established following investigation, the ARC may impose consequential actions in addition to any imposed by the assessor’s employing institution.
The revised policy explains that its approach is shaped by concerns including intellectual integrity and authorship, safeguarding intellectual property, culturally appropriate use of data, content reliability and bias, human oversight and expert judgement, and energy and environmental impacts. It also states that the ARC will continue to monitor developments in generative AI and update the policy as required.
