England’s Ofqual advises awarding bodies on AI risks in qualifications

New Ofqual guidance says awarding organisations should review assessment design and controls in response to AI-related malpractice risks.

Ofqual, the regulator responsible for qualifications, exams, and assessments in England, has issued an advice note to help awarding organisations assess and manage the risks of AI-related malpractice.

The note explains how existing Conditions of Recognition and related Guidance apply where learners use AI tools in ways that could undermine assessment validity. It does not create new regulatory requirements, but is intended to help awarding organisations understand how current expectations apply in this context.

The risks, Ofqual notes, will vary depending on the qualification and assessment design. Relevant factors include who sets the task, how specific it is, the type of output being assessed, the length and timing of the assessment, the level of supervision, access to digital devices and internet connectivity, and differences in delivery across centres.

The advice also points to wider contextual factors, including the stakes attached to an assessment, its weighting within a qualification, and norms around technology use in particular subject areas. Awarding organisations are advised to consider whether changes introduced to reduce vulnerability to AI-related malpractice could, in turn, affect the construct being assessed or assessment validity more broadly.

The note states that awarding organisations must consider the reasonable steps needed to prevent malpractice and manage its effects, with measures proportionate to the identified risks. Possible responses include adapting assessment design, clarifying acceptable and unacceptable uses of AI, introducing supervision or controls on digital access, and requiring authenticity declarations from learners.

Ofqual also advises awarding organisations to review how they detect and investigate suspected malpractice. Statistical or technological tools may support that process, but should be treated as sources of evidence rather than sole determinants, given the risks of false positives and false negatives.
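The base-rate arithmetic behind that caution is easy to illustrate. The Python sketch below is purely illustrative and uses hypothetical figures (a 5% rate of misuse, a 90% detection rate, a 10% false-positive rate) that do not come from the Ofqual note; it shows how even a seemingly accurate detection tool can flag mostly honest work when genuine misuse is rare.

```python
# Illustrative only: why a detection-tool score should not be the sole
# determinant of malpractice. All figures are hypothetical assumptions,
# not taken from the Ofqual advice note.

def positive_predictive_value(prevalence: float,
                              true_positive_rate: float,
                              false_positive_rate: float) -> float:
    """Probability that a flagged script actually involved misuse (Bayes' rule)."""
    flagged_guilty = prevalence * true_positive_rate
    flagged_innocent = (1 - prevalence) * false_positive_rate
    return flagged_guilty / (flagged_guilty + flagged_innocent)

# Assume 5% of scripts involve AI misuse, the tool catches 90% of those,
# and it wrongly flags 10% of honest scripts.
ppv = positive_predictive_value(prevalence=0.05,
                                true_positive_rate=0.90,
                                false_positive_rate=0.10)
print(f"Chance a flagged script is genuine misuse: {ppv:.0%}")  # ~32%
```

Under those assumed rates, roughly two flags in three would point at honest work, which is why the note treats such scores as one strand of evidence among others rather than proof in themselves.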

The advice also notes that some qualifications may legitimately require the use of AI as part of the construct being assessed. In such cases, awarding organisations should set clear parameters for how AI may be used and how that use should be demonstrated or referenced.

Ofqual says awarding organisations should keep their arrangements under review as AI tools and patterns of learner use continue to evolve, and should use any cases of malpractice or maladministration to identify weaknesses and prevent recurrence.
