OpenAI is set to introduce an education-focused version of its chatbot to around 500,000 students and faculty at California State University. The rollout, covering 23 campuses, aims to provide personalised tutoring for students and administrative support for faculty members. The initiative is part of OpenAI’s broader effort to integrate its technology into education despite initial concerns about cheating and plagiarism.
Institutions such as the Wharton School, the University of Texas at Austin, and the University of Oxford have already adopted ChatGPT Enterprise. In response, OpenAI launched ChatGPT Edu in May last year to cater specifically to academic institutions. The education sector has become a growing focus for AI companies, with Alphabet investing $120 million into AI education programmes and preparing to introduce its Gemini chatbot into school-issued Google accounts for teenage students.
Competition in AI-driven education is intensifying. In the UK, Prime Minister Keir Starmer inaugurated the first Google-funded AI university in London, providing teens with AI and machine learning resources. As AI adoption in schools increases, major tech companies are vying for a dominant role in shaping the future of digital learning.
Young people in Guernsey are being offered a free six-week course on AI to help them understand both the opportunities and challenges of the technology. Run by Digital Greenhouse in St Peter Port, the programme is open to students and graduates over the age of 16, regardless of their academic background. Experts from University College London (UCL) deliver the lessons remotely each week.
Jenny de la Mare from Digital Greenhouse said the course was designed to “inform and inspire” participants while helping them stand out in job and university applications. She emphasised that the programme was not limited to STEM students and could serve as a strong introduction to AI for anyone interested in the field.
Recognising that young people in Guernsey may have fewer opportunities to attend major tech events in the UK, organisers hope the course will give them a competitive edge. The programme has already started but is still open for registrations, with interested individuals encouraged to contact Digital Greenhouse.
AI-powered study rooms are revolutionising online education in China by offering personalised, tech-driven learning experiences. These spaces cater to students aged 8 to 18, using advanced software to provide interactive lessons and real-time feedback. The AI systems analyse mistakes, adjust course materials, and generate detailed progress reports for parents, who can track their child’s improvement remotely. By leveraging technology, these study rooms aim to make education more engaging and tailored to individual learning needs.
These AI rooms are marketed as self-study spaces rather than traditional tutoring centres, allowing them to navigate China’s strict private tutoring regulations by framing their services as facility rentals or membership plans. This creative positioning allows them to operate within a regulatory grey area, avoiding restrictions on off-campus tutoring for students in grades one through nine. Membership fees range from 1,000 to 3,000 yuan monthly, making them a more affordable long-term alternative to expensive one-on-one tutoring sessions.
Despite their growing popularity, education experts remain sceptical of their educational value. Critics argue that many of these systems lack proper AI functionality, relying instead on preloaded prompts and automated responses. Furthermore, there are concerns that their heavy emphasis on drilling questions to improve test scores may neglect critical thinking and deeper comprehension. However, proponents believe these AI-powered study rooms represent an essential step toward integrating technology into education and expanding access to personalised learning.
A new poll by the Allensbach Institute reveals that Germans who rely on TikTok for news are less likely to view China as a dictatorship, criticise Russia’s invasion of Ukraine, or trust vaccines compared to consumers of traditional media. The findings suggest that the platform’s information ecosystem could contribute to scepticism about widely accepted narratives and amplify conspiracy theories. Among surveyed groups, TikTok users exhibited levels of distrust in line with users of X, formerly Twitter.
The study, commissioned by a foundation affiliated with Germany’s Free Democrats, comes amid ongoing US debates over the potential national security risks posed by the Chinese-owned app. The research highlights how young Germans, who make up TikTok’s largest user base, are more inclined to support the far-right Alternative for Germany (AfD) party, which has surged in popularity ahead of Germany’s upcoming election. By contrast, consumers of traditional media were significantly more supportive of Ukraine and critical of Russian aggression.
Concerns about misinformation on platforms like TikTok are echoed by researchers, who warn that foreign powers, particularly Russia, exploit social media to influence public opinion. The poll found that while 57% of newspaper readers believed China to be a dictatorship, only 28.1% of TikTok users shared the same view. Additionally, TikTok users were less likely to believe that China and Russia disseminate false information, while being more suspicious of their own government. Calls for action to address misinformation underscore the platform’s potential impact on younger, more impressionable audiences.
Younger members of Generation Z are turning to ChatGPT for schoolwork, with a new Pew Research Center survey revealing that 26% of US teens aged 13 to 17 have used the AI-powered chatbot for homework. This figure has doubled since 2023, highlighting the growing reliance on AI tools in education. The survey also showed mixed views among teens about its use, with 54% finding it acceptable for research, while smaller proportions endorsed its use for solving maths problems (29%) or writing essays (18%).
Experts have raised concerns about the limitations of ChatGPT in academic contexts. Studies indicate the chatbot struggles with accuracy in maths and certain subject areas, such as social mobility and African geopolitics. Research also shows varying impacts on learning outcomes, with Turkish students who used ChatGPT performing worse on a maths test than peers who didn’t. German students, while finding research materials more easily, synthesised information less effectively when using the tool.
Educators remain cautious about integrating AI into classrooms. A quarter of public K-12 teachers surveyed by Pew believed AI tools like ChatGPT caused more harm than good in education. Another study, by the RAND Corporation, found that only 18% of K-12 teachers actively use AI in their teaching practices. The disparities in effectiveness and the tool's limitations underscore the need for careful consideration of its role in learning environments.
Hull College has embraced AI to enhance learning, from lesson planning to real-time language translation. The institution is hosting a conference at its Queens Gardens campus to discuss how AI is influencing teaching, learning, and career preparation.
Mature student Sharron Knight, retraining to become a police call handler, attended an AI seminar and described the technology as ‘not as scary’ as she had initially thought, expressing surprise at the vast possibilities it offers. Student Albara Tahir, whose first language is Sudanese Arabic, has also benefited from AI tools, using them to improve his English skills.
Hull College principal Debra Gray highlighted AI’s potential to empower educators. She compared the tool to a bicycle, helping both teachers and students reach their goals faster without altering the core learning process.
The UK government recently announced plans to expand AI’s role in public services and economic growth, including creating ‘AI Growth Zones’ to support job creation and infrastructure projects. AI is already being used in UK hospitals for cancer diagnostics and other critical tasks.
Ian Russell, father of Molly Russell, has called on the UK government to take stronger action on online safety, warning that delays in regulation are putting children at risk. In a letter to Prime Minister Sir Keir Starmer, he criticised Ofcom’s approach to enforcing the Online Safety Act, describing it as a “disaster.” Russell accused tech firms, including Meta and X, of prioritising profits over safety and moving towards a more dangerous, unregulated online environment.
Campaigners argue that Ofcom’s guidelines contain major loopholes, particularly in addressing harmful content such as live-streamed material that promotes self-harm and suicide. While the government insists that tech companies must act responsibly, the slow progress of new regulations has raised concerns. Ministers acknowledge that additional legislation may be required as AI technology evolves, introducing new risks that could further undermine online safety.
Russell has been a prominent campaigner for stricter online regulations since his daughter’s death in 2017. Despite the Online Safety Act granting Ofcom the power to fine tech firms, critics believe enforcement remains weak. With concerns growing over the effectiveness of current safeguards, pressure is mounting on the government to act decisively and ensure platforms take greater responsibility in protecting children from harmful content.
Education technology provider PowerSchool has suffered a major data breach, exposing the personal information of millions of students and teachers. Hackers gained access to its systems by exploiting stolen credentials, using a tool within the company’s PowerSource support portal to export sensitive data.
The stolen records include names, addresses, and potentially more sensitive details such as Social Security numbers and medical information in the US and Canada. PowerSchool, which manages academic records for over 60 million K-12 students, assured customers that not all users were affected. However, the breach has left schools scrambling to assess the damage.
PowerSchool insists the hack wasn’t due to a flaw in its software but was a result of unauthorised access using legitimate credentials. The company has engaged cybersecurity experts to investigate and taken steps to improve security, including deactivating compromised accounts and strengthening password controls.
Critics argue that PowerSchool was slow to inform customers, potentially putting students, parents, and educators at greater risk of identity theft. While PowerSchool is offering affected users credit monitoring and identity protection services, the incident has sparked calls for stricter regulations on data security in the education sector.
As concerns grow over the impact of smartphones on children, several European countries are implementing or debating restrictions on their use in schools. France, for example, has prohibited phones in primary and secondary schools since 2018 and recently extended the policy to include ‘digital breaks’ at some institutions. Similarly, the Netherlands and Hungary have adopted bans, with exceptions for educational purposes or special needs, while Italy, Greece, and Latvia have also imposed restrictions.
The debate is fuelled by studies showing that smartphones can distract students, though some argue they can also be useful for learning. A 2023 UNESCO report recommended limiting phones in schools to support education, with more than 60 countries now following similar measures. However, enforcement remains a challenge, as some reports suggest that many students still find ways to use their devices despite the bans.
Experts remain divided on the issue. While some highlight the risks of distraction and mental health impacts, others emphasise the need for balance. ‘Banning phones can be beneficial, but we must ensure children have adequate alternatives for education and communication,’ said Ben Carter, a professor of medical statistics at King’s College London.
The trend reflects broader concerns about screen time among children, with countries like Sweden and Luxembourg calling for clearer rules to promote healthier digital habits. While opinions differ, the growing movement underscores a collective effort to create focused, engaging, and healthier learning environments.
AI is transforming education for students with disabilities, offering tools that level the playing field. From reading assistance to speech and language tools, AI is enabling students to overcome learning barriers. For 14-year-old Makenzie Gilkison, who has dyslexia, AI-powered assistive technology has been life-changing, allowing her to excel academically and keep pace with her peers.
Schools are increasingly adopting AI for personalised learning, balancing its benefits with ethical considerations. Tools like chatbots and text-to-speech programs enhance accessibility while raising concerns about over-reliance and the potential for misuse. Experts emphasise that AI should support, not replace, learning.
Research and development are advancing rapidly, addressing challenges like children’s handwriting and speech impediments. Initiatives such as the National AI Institute for Exceptional Education aim to refine these tools, while educators work to ensure students and teachers are equipped to harness their potential effectively.