Survey links TikTok news consumption to scepticism on major global issues

A new poll by the Allensbach Institute reveals that Germans who rely on TikTok for news are less likely than consumers of traditional media to view China as a dictatorship, criticise Russia’s invasion of Ukraine, or trust vaccines. The findings suggest that the platform’s information ecosystem could contribute to scepticism about widely accepted narratives and amplify conspiracy theories. Among surveyed groups, TikTok users exhibited levels of distrust in line with users of X, formerly Twitter.

The study, commissioned by a foundation affiliated with Germany’s Free Democrats, comes amid ongoing US debates over the potential national security risks posed by the Chinese-owned app. The research highlights how young Germans, who make up TikTok’s largest user base, are more inclined to support the far-right Alternative for Germany (AfD) party, which has surged in popularity ahead of Germany’s upcoming election. By contrast, consumers of traditional media were significantly more supportive of Ukraine and critical of Russian aggression.

Concerns about misinformation on platforms like TikTok are echoed by researchers, who warn that foreign powers, particularly Russia, exploit social media to influence public opinion. The poll found that while 57% of newspaper readers believed China to be a dictatorship, only 28.1% of TikTok users shared the same view. Additionally, TikTok users were less likely to believe that China and Russia disseminate false information, while being more suspicious of their own government. Calls for action to address misinformation underscore the platform’s potential impact on younger, more impressionable audiences.

ChatGPT usage in schools doubles among US teens

Younger members of Generation Z are turning to ChatGPT for schoolwork, with a new Pew Research Center survey revealing that 26% of US teens aged 13 to 17 have used the AI-powered chatbot for homework. This figure has doubled since 2023, highlighting the growing reliance on AI tools in education. The survey also showed mixed views among teens about its use, with 54% finding it acceptable for research, while smaller proportions endorsed its use for solving maths problems (29%) or writing essays (18%).

Experts have raised concerns about the limitations of ChatGPT in academic contexts. Studies indicate the chatbot struggles with accuracy in maths and certain subject areas, such as social mobility and African geopolitics. Research also shows varying impacts on learning outcomes, with Turkish students who used ChatGPT performing worse on a maths test than peers who didn’t. German students, while finding research materials more easily, synthesised information less effectively when using the tool.

Educators remain cautious about integrating AI into classrooms. A quarter of public K-12 teachers surveyed by Pew believed AI tools like ChatGPT caused more harm than good in education. Another study by the RAND Corporation found only 18% of K-12 teachers actively use AI in their teaching practices. The disparities in effectiveness and the tool’s limitations underscore the need for careful consideration of its role in learning environments.

AI helps Hull students overcome language barriers

Hull College has embraced AI to enhance learning, from lesson planning to real-time language translation. The institution is hosting a conference at its Queens Gardens campus to discuss how AI is influencing teaching, learning, and career preparation.

Mature student Sharron Knight, retraining to become a police call handler, attended an AI seminar and described the technology as ‘not as scary’ as she initially thought. She expressed surprise at the vast possibilities it offers. Student Albara Tahir, whose first language is Sudanese Arabic, has also benefited from AI tools, using them to improve his English skills.

Hull College principal Debra Gray highlighted AI’s potential to empower educators. She compared the tool to a bicycle, helping both teachers and students reach their goals faster without altering the core learning process.

The UK government recently announced plans to expand AI’s role in public services and economic growth, including creating ‘AI Growth Zones’ to support job creation and infrastructure projects. AI is already being used in UK hospitals for cancer diagnostics and other critical tasks.

Father of Molly Russell urges UK to strengthen online safety laws

Ian Russell, father of Molly Russell, has called on the UK government to take stronger action on online safety, warning that delays in regulation are putting children at risk. In a letter to Prime Minister Sir Keir Starmer, he criticised Ofcom’s approach to enforcing the Online Safety Act, describing it as a ‘disaster’. Russell accused tech firms, including Meta and X, of prioritising profits over safety and moving towards a more dangerous, unregulated online environment.

Campaigners argue that Ofcom’s guidelines contain major loopholes, particularly in addressing harmful content such as live-streamed material that promotes self-harm and suicide. While the government insists that tech companies must act responsibly, the slow progress of new regulations has raised concerns. Ministers acknowledge that additional legislation may be required as AI technology evolves, introducing new risks that could further undermine online safety.

Russell has been a prominent campaigner for stricter online regulations since his daughter’s death in 2017. Despite the Online Safety Act granting Ofcom the power to fine tech firms, critics believe enforcement remains weak. With concerns growing over the effectiveness of current safeguards, pressure is mounting on the government to act decisively and ensure platforms take greater responsibility in protecting children from harmful content.

Education giant PowerSchool hit by major data leak

Education technology provider PowerSchool has suffered a major data breach, exposing the personal information of millions of students and teachers. Hackers gained access to its systems by exploiting stolen credentials, using a tool within the company’s PowerSource support portal to export sensitive data.

The stolen records include names, addresses and, in the US and Canada, potentially more sensitive details such as Social Security numbers and medical information. PowerSchool, which manages academic records for over 60 million K-12 students, assured customers that not all users were affected. However, the breach has left schools scrambling to assess the damage.

PowerSchool insists the hack wasn’t due to a flaw in its software but was a result of unauthorised access using legitimate credentials. The company has engaged cybersecurity experts to investigate and taken steps to improve security, including deactivating compromised accounts and strengthening password controls.

Critics argue that PowerSchool was slow to inform customers, potentially putting students, parents, and educators at greater risk of identity theft. While PowerSchool is offering affected users credit monitoring and identity protection services, the incident has sparked calls for stricter regulations on data security in the education sector.

European nations debate school smartphone bans

As concerns grow over the impact of smartphones on children, several European countries are implementing or debating restrictions on their use in schools. France, for example, has prohibited phones in primary and secondary schools since 2018 and recently extended the policy to include ‘digital breaks’ at some institutions. Similarly, the Netherlands and Hungary have adopted bans, with exceptions for educational purposes or special needs, while Italy, Greece, and Latvia have also imposed restrictions.

The debate is fuelled by studies showing that smartphones can distract students, though some argue they can also be useful for learning. A 2023 UNESCO report recommended limiting phones in schools to support education, with more than 60 countries now following similar measures. However, enforcement remains a challenge, as some reports suggest that many students still find ways to use their devices despite the bans.

Experts remain divided on the issue. While some highlight the risks of distraction and mental health impacts, others emphasise the need for balance. ‘Banning phones can be beneficial, but we must ensure children have adequate alternatives for education and communication,’ said Ben Carter, a professor of medical statistics at King’s College London.

The trend reflects broader concerns about screen time among children, with countries like Sweden and Luxembourg calling for clearer rules to promote healthier digital habits. While opinions differ, the growing movement underscores a collective effort to create focused, engaging, and healthier learning environments.

Schools embrace AI to improve accessibility

AI is transforming education for students with disabilities, offering tools that level the playing field. From reading assistance to speech and language tools, AI is enabling students to overcome learning barriers. For 14-year-old Makenzie Gilkison, who has dyslexia, AI-powered assistive technology has been life-changing, allowing her to excel academically and keep pace with her peers.

Schools are increasingly adopting AI for personalised learning, balancing its benefits with ethical considerations. Tools like chatbots and text-to-speech programs enhance accessibility while raising concerns about over-reliance and the potential for misuse. Experts emphasise that AI should support, not replace, learning.

Research and development are advancing rapidly, addressing challenges like children’s handwriting and speech impediments. Initiatives such as the National AI Institute for Exceptional Education aim to refine these tools, while educators work to ensure students and teachers are equipped to harness their potential effectively.

MCU and Fortinet to enhance cybersecurity education in the Philippines

Manila Central University (MCU) has partnered with Fortinet, a global leader in cybersecurity, through its Academic Partner Program to address the growing talent shortage in the Philippines. The collaboration aims to equip students with essential skills to meet industry demands by integrating Fortinet’s Network Security Expert (NSE) training and certification program into the university’s curriculum, either as coursework or as standalone offerings.

Faculty members will receive advanced training, and students will benefit from guest lectures, practical exercises, and hands-on learning in areas like network security, malware analysis, and defence strategies. Additionally, the partnership includes establishing a state-of-the-art Cyber Innovation Lab to provide immersive learning experiences.

The initiative aligns with findings from Fortinet’s ‘Cybersecurity Skills Gap 2024 Global Research Report,’ which revealed that 94% of organisations in the Philippines experienced security breaches in 2023, with 77% partly attributed to a lack of cybersecurity skills. MCU joins nine other institutions, including Mapúa University and Mindanao State University-Sulu, in Fortinet’s nationwide effort to strengthen cybersecurity education.

The partnership also represents a significant step toward bridging the cybersecurity skills gap in the Philippines. By combining Fortinet’s expertise with MCU’s academic foundation, the program offers students industry-recognised certifications and practical knowledge needed to excel as cybersecurity professionals.

Why does it matter?

The initiative addresses immediate challenges highlighted in the report and strengthens the country’s capacity to defend against evolving digital threats, ensuring a robust pipeline of future professionals ready to meet global cybersecurity standards.

UCLA to offer AI-developed humanities course

UCLA is breaking new ground with an AI-developed comparative literature course set to launch in winter 2025. The class, covering literature from the Middle Ages to the 17th century, will feature a textbook, assignments, and teaching assistant (TA) resources generated by Kudu, an AI-powered platform founded by UCLA physics professor Alexander Kusenko. This initiative marks the first use of AI-generated materials in UCLA’s humanities division.

Professor Zrinka Stahuljak, who designed the course, collaborated with Kudu by providing lecture notes, PowerPoint slides, and videos from previous classes. The AI system produced the materials within three to four months, requiring just 20 hours of professor involvement. Kudu’s platform allows students to interact with course content through questions answered strictly within the provided material, ensuring focused and accurate responses.

By streamlining material creation, the approach frees up professors and TAs to engage more closely with students while maintaining consistency in course delivery. UCLA hopes this innovative method will enhance the learning experience and redefine education in the humanities.

OpenAI explores AI tools to transform education

OpenAI is working to integrate AI into e-learning through customisable GPT tools, potentially revolutionising how students interact with academic content. According to Siya Raj Purohit of OpenAI’s education team, professors are already using AI to create tailored course models, allowing students to engage with focused material. These tools could become staples in education, enabling personalised, lifelong learning.

The initiative complements OpenAI’s broader push into education, marked by the launch of ChatGPT Edu for universities and the hiring of former Coursera executive Leah Belsky. Despite these efforts, challenges remain as many educators express reservations about AI’s role in teaching. Tools like Khanmigo, developed with OpenAI, demonstrate AI’s potential but also reveal its current limitations, including accuracy issues.

With the education AI market expected to reach $88.2 billion, OpenAI is committed to refining its tools and addressing educators’ concerns to drive adoption in this burgeoning sector.