AI cheating crisis leaves teachers in despair

Teachers across the US are growing alarmed by widespread student use of AI for assignments, calling it a crisis that undermines education itself. Some professors report that students now rely on AI for everything from note-taking to essay writing, leaving educators questioning the future of learning.

Fear of false accusations is rising among honest students, some of whom now record their screens to prove their work is genuine. AI-detection tools often misfire, further complicating efforts to distinguish real effort from AI assistance.

While some argue for banning tech and returning to traditional classroom methods, others suggest rethinking US education entirely. Rather than fighting AI, some believe it offers a chance to re-engage students by giving them meaningful work they want to do.

UK government backs AI to help teachers and reduce admin

The UK government has unveiled new guidance for schools that promotes the use of AI to reduce teacher workloads and increase face-to-face time with pupils.

The Department for Education (DfE) says AI could take over time-consuming administrative tasks such as lesson planning, report writing, and email drafting—allowing educators to focus more on classroom teaching.

The guidance, aimed at schools and colleges in the UK, highlights how AI can assist with formative assessments like quizzes and low-stakes feedback, while stressing that teachers must verify outputs for accuracy and data safety.

It also recommends using only school-approved tools and limiting AI use to tasks that support, rather than replace, teaching expertise.

Education unions welcomed the move but said investment is needed to make it work. Leaders from the NAHT and ASCL praised AI’s potential to ease pressure on staff and help address recruitment issues, but warned that schools require proper infrastructure and training.

The government has pledged £1 million to support AI tool development for marking and feedback.

Education Secretary Bridget Phillipson said the plan will free teachers to deliver more personalised support, adding: ‘We’re putting cutting-edge AI tools into the hands of our brilliant teachers to enhance how our children learn and develop.’

China’s AI tools disabled for gaokao exam

As millions of high school students across China began the rigorous ‘gaokao’ college entrance exam, the country’s leading tech companies took unprecedented action by disabling AI features on their popular platforms.

Apps from Tencent, ByteDance, and Moonshot AI temporarily blocked functionalities like photo recognition and real-time question answering. This move aimed to prevent students from using AI chatbots to cheat during the critical national examination, which largely dictates university admissions in China.

This year, approximately 13.4 million students are participating in the ‘gaokao,’ a multi-day test that serves as a pivotal determinant for social mobility, particularly for those from rural or lower-income backgrounds.

The immense pressure associated with the exam has historically fuelled intense test preparation. Screenshots circulating on the Chinese social media app Rednote confirmed that AI chatbots such as Tencent’s YuanBao, ByteDance’s Doubao, and Moonshot AI’s Kimi displayed messages announcing that exam-relevant features had been temporarily disabled to ensure fairness.

China’s handling of the ‘gaokao’ highlights a balanced approach to AI: promoting AI education from a young age, with compulsory instruction in Beijing schools this autumn, while firmly asserting that AI is for learning, not cheating. Regulators draw a clear line, reinforcing that AI should aid development but never compromise academic integrity.

This coordinated action by major tech firms reinforces the message that AI has no place in the examination hall, despite China’s broader push to cultivate an AI-literate generation.

UK teams with tech giants on AI training

The UK government is launching a nationwide AI skills initiative aimed at both workers and schoolchildren, with Prime Minister Keir Starmer announcing partnerships with major tech companies including Google, Microsoft and Amazon.

The £187 million TechFirst programme will provide AI education to one million secondary students and train 7.5 million workers over the next five years.

Rather than keeping such tools limited to specialists, the government plans to make AI training accessible across classrooms and businesses. Companies involved will make learning materials freely available to boost digital skills and productivity, particularly in using chatbots and large language models.

Starmer said the scheme is designed to empower the next generation to shape AI’s future instead of being shaped by it. He called it the start of a new era of opportunity and growth, as the UK aims to strengthen its global leadership in AI.

The initiative arrives as the country’s AI sector, currently worth £72 billion, is projected to grow to more than £800 billion by 2035.

The government also signed two agreements with NVIDIA to support a nationwide AI talent pipeline, reinforcing efforts to expand both the workforce and innovation in the sector.

Schools in the EU start adapting to the AI Act

European schools are taking their first concrete steps to integrate AI in line with the EU AI Act, with educators and experts urging a measured, strategic approach to compliance.

At a recent conference on AI in education, school leaders and policymakers explored how to align AI adoption with the incoming regulations.

With key provisions of the EU AI Act already in effect and full enforcement coming by August 2026, the pressure is on schools to ensure their use of AI is transparent, fair, and accountable. The law classifies AI tools by risk level, with those used to evaluate or monitor students subject to stricter oversight.

Matthew Wemyss, author of ‘AI in Education: An EU AI Act Guide,’ laid out a framework for compliance: assess current AI use, scrutinise the impact on students, and demand clear documentation from vendors.

Wemyss stressed that schools remain responsible as deployers, even when using third-party tools, and should appoint governance leads who understand both technical and ethical aspects.

Education consultant Philippa Wraithmell warned schools not to confuse action with strategy. She advocated starting small, prioritising staff confidence, and ensuring every tool aligns with learning goals, data safety, and teacher readiness.

Al Kingsley MBE emphasised the role of strong governance structures and parental transparency, urging school boards to improve their digital literacy to lead effectively.

The conference highlighted a unifying theme: meaningful AI integration in schools requires intentional leadership, community involvement, and long-term planning. With the right mindset, schools can use AI not just to automate, but to enhance learning outcomes responsibly.

AI in higher education: A mixed blessing for students and institutions

AI is rapidly reshaping university life, offering students new tools to boost creativity, structure assignments, and develop ideas more efficiently. At institutions like Oxford Brookes University, students such as 22-year-old Sunjaya Phillips have found that AI enhances confidence and productivity when used responsibly and with faculty guidance.

She describes AI as a ‘study buddy’ that transformed her academic experience, especially during creative blocks, when AI-generated prompts saved valuable time. However, the rise of AI in academia also raises important concerns.

A global student survey revealed that while many embrace AI in their studies, a majority fear its long-term implications for employment. Some admit to misusing the technology for dishonest purposes, highlighting the ethical challenges it presents.

Experts like Dr Charlie Simpson from Oxford Brookes caution that relying too heavily on AI to ‘do the thinking’ undermines educational goals and may devalue the learning process.

Despite these concerns, many educators and institutions remain optimistic about AI’s potential—if used wisely. Professor Keiichi Nakata from Henley Business School stresses that AI is not a replacement but a powerful aid, likening its expected workplace relevance to today’s basic IT skills.

He and others argue that responsible AI use could elevate the capabilities of future graduates and reshape degree expectations accordingly. While some students worry about job displacement, others, like Phillips, view AI as a support system rather than a threat.

The consensus among academics is clear: success in the age of AI will depend not on avoiding the technology, but on mastering it with discernment, ethics, and adaptability.

OpenAI hits 3 million business subscribers

OpenAI has added another 1 million paying business subscribers since February, bringing the total to 3 million across ChatGPT Enterprise, Team and Edu.

The milestone was shared during a company livestream and confirmed in interviews with outlets like CNBC.

Chief Operating Officer Brad Lightcap noted that the business tools are being adopted widely, even in regulated sectors like finance and healthcare.

He said growth among individual users has fuelled enterprise adoption instead of stalling it, highlighting a feedback loop between consumer and business uptake.

OpenAI launched ChatGPT Enterprise in August 2023, followed by Team in January 2024 and Edu in May 2024. Within a year of its first business product, the firm had already reached 1 million paying business users—a number that has now tripled.

Lightcap said AI is reshaping work across sectors—from student learning to patient care and public services—by increasing productivity instead of just automating tasks.

A separate PYMNTS Intelligence report found that 82% of workers using generative AI weekly believe it improves their output. OpenAI’s overall user base has reportedly reached 800 million people, with CEO Sam Altman claiming 10% of the global population now uses the company’s tools.

ACAI and Universal AI University partner to boost AI innovation in Qatar

The Arab Centre for Artificial Intelligence (ACAI) and India’s Universal AI University (UAI) have partnered through a Memorandum of Understanding (MoU) to accelerate the advancement of AI across Qatar and the broader region. The collaboration aims to enhance education, research, and innovation in AI and emerging technologies.

Together, ACAI and UAI plan to establish a specialised AI research centre and develop advanced training programmes to cultivate local expertise. They will also launch various online and short-term educational courses designed to address the growing demand for skilled AI professionals in Qatar’s job market, ensuring that the workforce is well-prepared for future technological developments.

Looking forward, the partnership envisions creating a dedicated AI-focused university campus. The initiative aligns with Qatar’s vision to transition into a knowledge-based economy by fostering innovation and offering academic programmes in AI, engineering, business administration, environmental sustainability, and other emerging technologies.

The MoU is valid for ten years and includes provisions for dispute resolution, intellectual property rights management, and annual reviews to ensure tangible and sustainable outcomes. Further detailed implementation agreements are expected to formalise the partnership’s operational aspects.

AI to disrupt jobs, warns DeepMind CEO, as Gen Alpha faces new realities

AI will likely cause significant job disruption in the next five years, according to Demis Hassabis, CEO of Google DeepMind. Speaking on the Hard Fork podcast, Hassabis emphasised that while AI is set to displace specific jobs, it will also create new roles that are potentially more meaningful and engaging.

He urged younger generations to prepare for a rapidly evolving workforce shaped by advanced technologies. Hassabis stressed the importance of early adaptation, particularly for Generation Alpha, who he believes should embrace AI just as millennials did the internet and Gen Z did smartphones.

Hassabis also called on students to become ‘ninjas with AI,’ encouraging them to understand how these tools work and master them for future success. While he highlighted the potential of generative AI, such as Google’s new Veo 3 video generator unveiled at I/O 2025, Hassabis also reminded listeners that a solid foundation in STEM remains vital.

He noted that soft skills like creativity, resilience, and adaptability are equally essential—traits that will help young people thrive in a future defined by constant technological change. As AI becomes more deeply embedded in industries from education to entertainment, Hassabis’ message is clear: the next generation must balance technical knowledge with human ingenuity to stay ahead in tomorrow’s job market.

West Lothian schools hit by ransomware attack

West Lothian Council has confirmed that personal and sensitive information was stolen following a ransomware cyberattack which struck the region’s education system on Tuesday, 6 May. Police Scotland has launched an investigation, and the matter remains an active criminal case.

Only a small fraction of the data held on the education network was accessed by the attackers. However, some of it included sensitive personal information. Parents and carers across West Lothian’s schools have been notified, and staff have also been advised to take extra precautions.

The cyberattack disrupted IT systems serving 13 secondary schools, 69 primary schools and 61 nurseries. Although the education network remains isolated from the rest of the council’s systems, contingency plans have been effective in minimising disruption, including during the ongoing SQA exams.

West Lothian Council has apologised to anyone potentially affected. It is continuing to work closely with Police Scotland and the Scottish Government. Officials have promised further updates as more information becomes available.
