Teachers get AI support for marking and admin

According to new government guidance, teachers in England are now officially encouraged to use AI to reduce their administrative workload. The Department for Education has released training materials that support the use of AI for low-stakes marking and routine parent communication.

The guidance allows AI-generated letters, such as those informing parents about minor issues like head lice outbreaks, and suggests using the technology for quizzes or homework marking.

While the move aims to cut workloads and improve classroom focus, schools are also advised to implement clear policies on appropriate use and ensure manual checks remain in place.

Experts have welcomed the guidance as a step forward but noted concerns about data privacy, budget constraints, and potential misuse.

The guidance comes as UK nations explore AI in education, with Northern Ireland commissioning a study on its impact and Scotland and Wales also advocating its responsible use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK regulator probes 4chan over online safety rules

The UK communications regulator Ofcom has launched an investigation into the controversial message board 4chan for potentially breaching new online safety laws. Under the Online Safety Act, platforms must assess and manage risks related to illegal content affecting UK users.

Ofcom stated that it requested 4chan’s risk assessment in April but received no response, prompting a formal inquiry into whether the site failed to meet its duty to protect users. The nature of the illegal content being scrutinised has not been disclosed.

The regulator emphasised that it has the authority to fine companies up to £18 million or 10% of their global revenue, whichever is higher. The investigation marks a significant test of the UK’s stricter regulatory powers to hold online services accountable.
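In practice, the cap is a simple maximum of the two figures. A minimal Python sketch, where global_revenue_gbp is a hypothetical stand-in for whatever revenue figure Ofcom treats as qualifying:

```python
def max_ofcom_fine(global_revenue_gbp: float) -> float:
    # Online Safety Act ceiling: £18 million or 10% of global revenue,
    # whichever is higher. The revenue argument is a placeholder for
    # whatever Ofcom deems the qualifying worldwide figure.
    return max(18_000_000.0, 0.10 * global_revenue_gbp)

print(max_ofcom_fine(500_000_000))  # 50000000.0 -- the 10% share dominates
print(max_ofcom_fine(100_000_000))  # 18000000.0 -- the fixed floor applies
```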

The watchdog’s concerns stem from user anonymity on 4chan, which has historically made the platform a hotspot for controversial, offensive, and often extreme content. A recent cyberattack further complicated matters, taking parts of the website offline for over a week.

Alongside 4chan, Ofcom is also investigating the pornographic site First Time Videos for failing to demonstrate that robust age verification systems are in place to block access by under-18s. This is part of a broader crackdown as platforms with age-restricted content face a July deadline to implement effective safeguards, which may include facial age-estimation technology.

Additionally, seven lesser-known file-sharing services, including Krakenfiles and Yolobit, are being scrutinised for potentially hosting child sexual abuse material. Like 4chan, these platforms reportedly failed to respond to Ofcom’s information requests. The regulator’s growing list of investigations signals a tougher era for digital platforms operating in the UK.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NHS launches urgent blood donor appeal

The NHS is appealing for one million new blood donors during National Blood Week, following a significant cyberattack on London hospitals that severely impacted blood stocks.

This urgent plea comes after an analysis by NHS Blood and Transplant revealed a shortfall of 200,000 donors, with only two percent of the population currently sustaining the nation’s blood supply.

The ongoing shortage stems from a June 2024 cyberattack, linked to the Russia-based Qilin ransomware group, which crippled the networks of Synnovis, a major NHS lab partner.

This disruption affected crucial pathology services at hospitals including King’s College Hospital and Guy’s and St Thomas’ NHS Foundation Trust, leading to postponed operations and procedures. The incident, declared a ‘critical incident’ by the NHS, prompted an Amber alert for severe blood shortages.

With blood stocks remaining low, exacerbated by recent bank holidays, the NHS is now facing a pressing need to prevent a ‘Red Alert,’ which would signify demand far exceeding capacity and threaten public safety.

Particular emphasis is being placed on recruiting more O-negative and Ro donors, with the public urged to come forward and help replenish vital supplies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China’s AI tools disabled for gaokao exam

As millions of high school students across China began the rigorous ‘gaokao’ college entrance exam, the country’s leading tech companies took unprecedented action by disabling AI features on their popular platforms.

Apps from Tencent, ByteDance, and Moonshot AI temporarily blocked functionalities like photo recognition and real-time question answering. This move aimed to prevent students from using AI chatbots to cheat during the critical national examination, which largely dictates university admissions in China.

This year, approximately 13.4 million students are participating in the ‘gaokao,’ a multi-day test that serves as a pivotal determinant for social mobility, particularly for those from rural or lower-income backgrounds.

The immense pressure associated with the exam has historically fuelled intense test preparation. However, screenshots circulating on the Chinese social media app Rednote confirmed that AI chatbots such as Tencent’s YuanBao, ByteDance’s Doubao, and Moonshot AI’s Kimi displayed messages announcing the temporary suspension of exam-related features to ensure fairness.

The ‘gaokao’ highlights China’s balanced approach to AI: promoting AI education from a young age, with compulsory instruction arriving in Beijing schools this autumn, while firmly asserting that the technology is for learning, not cheating. Regulators draw a clear line, reinforcing that AI should aid development but never compromise academic integrity.

This coordinated action by major tech firms reinforces the message that AI has no place in the examination hall, despite China’s broader push to cultivate an AI-literate generation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Growing push in Europe to regulate children’s social media use

Several European countries, led by Denmark, France, and Greece, are intensifying efforts to shield children from the potentially harmful effects of social media. With Denmark taking over the EU Council presidency from July, its Digital Minister, Caroline Stage Olsen, has made clear that her country will push for a ban on social media for children under 15.

Olsen has criticised current platforms for failing to remove illegal content and for relying on addictive features that encourage prolonged use, warning that they prioritise profit and data harvesting over the well-being of young users.

That initiative builds on growing concern across the EU about the mental and physical toll social media may take on children, including the spread of dangerous content, disinformation, cyberbullying, and unrealistic body image standards. France, for instance, has already passed legislation requiring parental consent for users under 15 and is pressing platforms to verify users’ ages more rigorously.

While the European Commission has issued draft guidelines to improve online safety for minors, such as making children’s accounts private by default, some countries are calling for tougher enforcement under the EU’s Digital Services Act. Despite these moves, there is currently no consensus across the EU for an outright ban.

Cultural differences and practical hurdles, like implementing consistent age verification, remain significant challenges. Still, proposals are underway to introduce a unified age of digital adulthood and a continent-wide age verification application, possibly even embedded into devices, to limit access by minors.

Olsen and her allies remain adamant, planning to dedicate the October meeting of EU digital ministers entirely to the issue of child online safety. They are also looking to future legislation, such as the Digital Fairness Act, to enforce stricter consumer protection standards that explicitly account for minors. Meanwhile, age verification and parental controls are seen as crucial first steps toward limiting children’s exposure to addictive and damaging online environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple reveals new AI features at WWDC

Apple has unveiled a range of AI features at its annual Worldwide Developers Conference, focusing on tighter privacy, enhanced user tools and broader integration with OpenAI’s ChatGPT. These updates will appear across iOS 26, iPadOS 26, macOS 26 and visionOS 26, set to launch in autumn.

While Apple Intelligence was first teased last year, the company now allows third-party developers to access its on-device AI models for the first time.

CEO Tim Cook and software chief Craig Federighi outlined how these features are intended to offer more personalised, efficient apps. Users of newer iPhones will benefit from tools such as live translation in Messages and FaceTime, and AI-powered image analysis via Visual Intelligence.

Apple also enables users to blend emojis creatively and use ChatGPT through its Image Playground to stylise photos. Enhancements to the Wallet app will help summarise order tracking from emails, and AI-generated voices will offer fitness updates.

Despite these innovations, Apple’s redesign of Siri remains incomplete and is not expected to launch soon.

The event delivered few major surprises, as many details had already been leaked. Investors responded cautiously, sending Apple shares down 1.2%. The firm has lost 20% of its value this year and no longer holds the top spot as the world’s most valuable company.

Nonetheless, Apple is expected to reveal more AI advancements in 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybersecurity alarm after 184 million credentials exposed

A vast unprotected database containing over 184 million credentials from major platforms and sectors has highlighted severe weaknesses in data security worldwide.

The leaked credentials, harvested by infostealer malware and stored in plain text, pose significant risks to consumers and businesses, underscoring an urgent need for stronger cybersecurity and better data governance.

Cybersecurity researcher Jeremiah Fowler discovered the 47 GB database exposing emails, passwords, and authorisation URLs from tech giants like Google, Microsoft, Apple, Facebook, and Snapchat, as well as banking, healthcare, and government accounts.

The data sat exposed without any encryption or authentication, accessible to anyone with the link.
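By contrast, responsible credential storage never keeps passwords in recoverable form: each one is hashed with a per-user salt so that even a fully exposed database yields no usable passwords. A minimal sketch using only Python’s standard library (the scrypt cost parameters here are illustrative, not a vetted policy):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A fresh random salt per credential defeats precomputed lookup tables.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    # Recompute with the stored salt and compare in constant time.
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

Storing only the salt and digest means a leak like the one Fowler found would expose no plain-text passwords, only values that are expensive to reverse.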

The credentials were reportedly collected by infostealer malware such as Lumma Stealer, which silently steals sensitive information from infected devices. The stolen data fuels a thriving underground economy involving identity theft, fraud, and ransomware.

The breach’s scope extends beyond tech, affecting critical infrastructure like healthcare and government services, raising concerns over personal privacy and national security. With recurring data breaches becoming the norm, industries must urgently reinforce security measures.

Chief Data Officers and IT risk leaders face mounting pressure as regulatory scrutiny intensifies. The leak highlights the need for proactive data stewardship through encryption, access controls, and real-time threat detection.

Many organisations struggle with legacy systems, decentralised data, and cloud adoption, complicating governance efforts.

Enterprise leaders must treat data as both a strategic asset and a liability, embedding cybersecurity into business processes and supply chains. Beyond technology, cultivating a culture of accountability and vigilance is essential to prevent costly breaches and protect brand trust.

The massive leak signals a new era in data governance where transparency and relentless improvement are critical. The message is clear: there is no room for complacency in safeguarding the digital world’s most valuable assets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hong Kong builds AI tool for breast cancer diagnosis

Researchers at the Hong Kong University of Science and Technology have unveiled a pioneering AI model called MOME for non-invasive breast cancer diagnosis.

Developed using China’s largest multiparametric MRI breast cancer dataset, MOME performs at a level comparable to seasoned radiologists and is currently undergoing clinical trials in more than ten hospitals.

Among the institutions participating in the validation phase are Shenzhen People’s Hospital, Guangzhou First Municipal People’s Hospital, and Yunnan Cancer Center. Early results show that MOME excels in predicting response to pre-surgical chemotherapy.

The development highlights the region’s growing capabilities in medtech innovation and could reshape diagnostic strategies for breast cancer across Asia. MOME’s clinical success may also pave the way for similar AI-led models in oncology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI cracks down on misuse of ChatGPT by foreign threat actors

OpenAI has shut down a network of ChatGPT accounts allegedly linked to nation-state actors from Russia, China, Iran, North Korea, and others after uncovering their use in cyber and influence operations.

The banned accounts were used to assist in developing malware, automate social media content, and conduct reconnaissance on sensitive technologies.

According to OpenAI’s latest threat report, a Russian-speaking group used the chatbot to iteratively improve malware code written in Go. Each account was used only once to refine the code before being abandoned, a tactic highlighting the group’s emphasis on operational security.

The malicious software was later disguised as a legitimate gaming tool and distributed online, infecting victims’ devices to exfiltrate sensitive data and establish long-term access.

Chinese-linked groups, including APT5 and APT15, were found using OpenAI’s models for a range of technical tasks, from researching satellite communications to developing scripts for Android app automation and penetration testing.

Other accounts were linked to influence campaigns that generated propaganda or polarising content in multiple languages, including efforts to pose as journalists and simulate public discourse around elections and geopolitical events.

The banned activities also included scams, social engineering, and politically motivated disinformation. OpenAI stressed that although some misuse was detected, none involved sophisticated or large-scale attacks enabled solely by its tools.

The company said it is continuing to improve detection and mitigation efforts to prevent abuse of its models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Elon Musk’s X tightens control on AI data use

Social media platform X has updated its developer agreement to prohibit the use of its content for training large language models.

The new clause, added under the restrictions section, forbids any attempt to use X’s API or content to fine-tune or train foundational or frontier AI models.

The move follows the acquisition of X by Elon Musk’s AI company xAI, which is developing its own models.

By restricting external access, the company aims to prevent competitors from freely using X’s data while maintaining control over a valuable resource for training AI systems.

X joins a growing list of platforms, including Reddit and The Browser Company, that have introduced terms blocking unauthorised AI training.

The shift reflects a broader industry trend towards limiting open data access amid the rising value of proprietary content in the AI arms race.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!