Turing Institute urges stronger AI research security

The Alan Turing Institute has warned that urgent action is needed to protect the UK’s AI research from espionage, intellectual property theft and risky international collaborations.

Its Centre for Emerging Technology and Security (CETaS) has published a report calling for a culture shift across academia to better recognise and mitigate these risks.

The report highlights inconsistencies in how security risks are understood within universities and a lack of incentives for researchers to follow government guidelines. Sensitive data, the dual-use potential of AI, and the risk of reverse engineering make the field particularly vulnerable to foreign interference.

Lead author Megan Hughes stressed the need for a coordinated response, urging government and academia to find the right balance between academic freedom and security.

The report outlines 13 recommendations, including expanding support for academic due diligence and issuing clearer guidance on high-risk international partnerships.

Further proposals call for compulsory research security training, better threat communication from national agencies, and standardised risk assessments before publishing AI research.

The aim is to build a more resilient research ecosystem as global interest in UK-led AI innovation continues to grow.

Meta and TikTok contest the EU’s compliance charges

Meta and TikTok have taken their fight against an EU supervisory fee to Europe’s second-highest court, arguing that the charges are disproportionate and based on flawed calculations.

The fee, introduced under the Digital Services Act (DSA), requires major online platforms to pay 0.05% of their annual global net income to cover the European Commission’s oversight costs.
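
As a rough illustration of the arithmetic (a minimal sketch: only the 0.05% rate comes from the DSA as described above; the income figure is invented):

```python
# Hypothetical illustration of the DSA supervisory fee described above.
# Only the 0.05% rate is from the article; the income figure is invented.
annual_global_net_income_eur = 40_000_000_000  # assume a hypothetical €40bn

supervisory_fee_eur = annual_global_net_income_eur * 0.0005  # 0.05% of net income
print(f"Supervisory fee: €{supervisory_fee_eur:,.0f}")  # Supervisory fee: €20,000,000
```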

Meta questioned the Commission’s methodology, claiming the levy was based on the entire group’s revenue instead of the specific EU-based subsidiary.

The company’s lawyer told judges it still lacked clarity on how the fee was calculated, describing the process as opaque and inconsistent with the spirit of the law.

TikTok also criticised the charge, alleging that inaccurate and discriminatory data inflated its payment.

Its legal team argued that user numbers were double-counted when people switched between devices, and that the Commission had wrongly calculated fees based on group profits rather than platform-specific earnings.
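
The device-switching complaint is, in effect, a deduplication problem. A toy sketch (with invented data, not TikTok’s or the Commission’s actual methodology) shows how counting device sessions rather than unique accounts inflates the headline figure:

```python
# Invented data: one person seen on two devices should count once, not twice.
sessions = [("alice", "phone"), ("alice", "laptop"), ("bob", "phone")]

device_level_count = len(sessions)                       # 3 if each device counts
unique_user_count = len({user for user, _ in sessions})  # 2 actual people
print(device_level_count, unique_user_count)  # prints: 3 2
```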

The Commission defended its approach, saying group resources should bear the cost when consolidated accounts are used. A ruling is expected from the General Court sometime next year.

AI startup faces lawsuit from Disney and Universal

Two of Hollywood’s most powerful studios, Disney and Universal, have launched a copyright infringement lawsuit against the AI firm Midjourney, accusing it of illegally replicating iconic characters.

The studios claim the San Francisco-based company copied their creative works without permission, describing it as a ‘bottomless pit of plagiarism’.

Characters such as Darth Vader, Elsa, and the Minions were cited in the 143-page complaint, which alleges Midjourney used these images to train its AI system and generate similar content.

Disney and Universal argue that the AI firm failed to invest in the creative process, yet profited heavily from the output, reportedly earning US$300 million in paid subscriptions last year.

Despite early attempts by the studios to raise concerns and propose safeguards already adopted by other AI developers, Midjourney allegedly ignored them and pressed ahead with further product releases. The company, which calls itself a small, self-funded team of 11, has declined to comment on the lawsuit directly but insists it has a long future ahead.

Disney’s legal chief, Horacio Gutierrez, stressed the importance of protecting creative works that result from decades of investment. While supporting AI as a tool for innovation, he maintained that ‘piracy is piracy’, regardless of whether humans or machines carry it out.

The studios are seeking damages and a court order to stop the AI firm from continuing its alleged copyright violations.

Wikipedia halts AI summaries test after backlash

Wikipedia has paused a controversial trial of AI-generated article summaries following intense backlash from its community of volunteer editors.

The Wikimedia Foundation had planned a two-week opt-in test for mobile users using summaries produced by Aya, an open-weight AI model developed by Cohere.

However, the reaction from editors was swift and overwhelmingly negative. The discussion page became flooded with objections, with contributors arguing that such summaries risked undermining the site’s reputation for neutrality and accuracy.

Some expressed concerns that inserting AI content would override Wikipedia’s long-standing collaborative approach by effectively installing a single, unverifiable voice atop articles.

Editors warned that AI-generated summaries lacked proper sourcing and could compromise the site’s credibility. Recent AI blunders by other tech giants, including Google’s glue-on-pizza mishap and Apple’s false death alert, were cited as cautionary examples of reputational risk.

For many, the possibility of similar errors appearing on Wikipedia was unacceptable.

Marshall Miller of the Wikimedia Foundation acknowledged the misstep in communication and confirmed the project’s suspension.

While the Foundation remains interested in exploring AI to improve accessibility, it has committed to ensuring any future implementation involves direct participation from the Wikipedia community.

Sam Altman predicts AI will discover new ideas

In a new blog post titled ‘The Gentle Singularity’, OpenAI CEO Sam Altman predicted that AI systems capable of producing ‘novel insights’ may arrive as early as 2026.

While Altman’s essay blends optimism with caution, it subtly signals the company’s next central ambition: creating AI that moves beyond repeating existing knowledge and mimicking human reasoning to generate genuinely original ideas.

Altman’s comments echo a broader industry trend. Researchers are already using OpenAI’s recent o3 and o4-mini models to generate new hypotheses. Competitors like Google, Anthropic and FutureHouse are also shifting their focus towards scientific discovery.

Google’s AlphaEvolve has reportedly devised novel solutions to complex maths problems, while FutureHouse claims to have built AI capable of genuine scientific breakthroughs.

Despite the optimism, experts remain sceptical. Critics argue that AI still struggles to ask meaningful questions, a key ingredient for genuine insight.

Former OpenAI researcher Kenneth Stanley, now leading Lila Sciences, says generating creative hypotheses is a more formidable challenge than agentic behaviour. Whether OpenAI achieves the leap remains uncertain, but Altman’s essay may hint at the company’s next bold step.

Massive leak exposes data of millions in China

Cybersecurity researchers have uncovered a brief but significant leak of over 600 gigabytes of data, exposing information on millions of Chinese citizens.

The haul, containing WeChat, Alipay, banking and residential records, appears to have come from a centralised system, suggesting large-scale surveillance rather than a random data breach.

According to research from Cybernews and cybersecurity consultant Bob Diachenko, the data was likely used to build individuals’ detailed behavioural, social and economic profiles.

They warned the information could be exploited for phishing, fraud, blackmail or even disinformation campaigns. Although only 16 datasets were reviewed before the database vanished, they pointed to a highly organised and purposeful collection effort.

The source of the leak remains unknown, but the scale and nature of the data suggest it may involve government-linked or state-backed entities rather than lone hackers.

The exposed information could allow malicious actors to track residence locations, financial activity and personal identifiers, placing millions of people at risk.

UK regulator probes 4chan over online safety rules

The UK communications regulator Ofcom has launched an investigation into the controversial message board 4chan for potentially breaching new online safety laws. Under the Online Safety Act, platforms must assess and manage risks related to illegal content affecting UK users.

Ofcom stated that it requested 4chan’s risk assessment in April but received no response, prompting a formal inquiry into whether the site failed to meet its duty to protect users. The nature of the illegal content being scrutinised has not been disclosed.

The regulator emphasised that it has the authority to fine companies up to £18 million or 10% of their global revenue, whichever is higher. The investigation marks a significant test of the UK’s stricter regulatory powers to hold online services accountable.
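
In practice, the £18 million figure acts as a floor, with the revenue-based amount taking over for larger firms. A minimal sketch of that rule (the revenue figures below are invented):

```python
# The "higher of £18m or 10% of global revenue" rule described above.
def maximum_fine_gbp(global_revenue_gbp: float) -> float:
    return max(18_000_000, 0.10 * global_revenue_gbp)

print(maximum_fine_gbp(50_000_000))     # smaller firm: the £18m floor applies
print(maximum_fine_gbp(2_000_000_000))  # larger firm: 10% of revenue, £200m
```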

The watchdog’s concerns stem from user anonymity on 4chan, which has historically made the platform a hotspot for controversial, offensive, and often extreme content. A recent cyberattack further complicated matters, leaving parts of the website offline for over a week.

Alongside 4chan, Ofcom is also investigating the pornographic site First Time Videos for failing to demonstrate that robust age verification systems are in place to block access by under-18s. The probe is part of a broader crackdown, as platforms with age-restricted content face a July deadline to implement effective safeguards, which may include facial age-estimation technology.

Additionally, seven lesser-known file-sharing services, including Krakenfiles and Yolobit, are being scrutinised for potentially hosting child sexual abuse material. Like 4chan, these platforms reportedly failed to respond to Ofcom’s information requests. The regulator’s growing list of investigations signals a tougher era for digital platforms operating in the UK.

China’s AI tools disabled for gaokao exam

As millions of high school students across China began the rigorous ‘gaokao’ college entrance exam, the country’s leading tech companies took unprecedented action by disabling AI features on their popular platforms.

Apps from Tencent, ByteDance, and Moonshot AI temporarily blocked functionalities like photo recognition and real-time question answering. This move aimed to prevent students from using AI chatbots to cheat during the critical national examination, which largely dictates university admissions in China.
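
None of the companies has described its implementation, but the behaviour amounts to a time-windowed feature gate. A speculative sketch (the dates, function name, and message are invented, not any vendor’s actual code):

```python
# Speculative sketch of a time-windowed feature gate; the exam dates and
# user-facing message are invented, not any vendor's actual implementation.
from datetime import date

EXAM_START, EXAM_END = date(2025, 6, 7), date(2025, 6, 10)

def photo_answering_available(today: date) -> bool:
    """Disable photo-based question answering during the exam window."""
    return not (EXAM_START <= today <= EXAM_END)

if not photo_answering_available(date.today()):
    print("This feature is paused during the exam period to ensure fairness.")
```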

This year, approximately 13.4 million students are taking the ‘gaokao’, a multi-day test that serves as a pivotal determinant of social mobility, particularly for those from rural or lower-income backgrounds.

The immense pressure associated with the exam has historically fuelled intense test preparation. Screenshots circulating on the Chinese social media app Rednote confirmed that AI chatbots like Tencent’s YuanBao, ByteDance’s Doubao, and Moonshot AI’s Kimi displayed messages indicating the temporary closure of exam-relevant features to ensure fairness.

The ‘gaokao’ highlights China’s balanced approach to AI: promoting AI education from a young age, with compulsory instruction in Beijing schools from this autumn, while firmly asserting that the technology is for learning, not cheating. Regulators draw a clear line: AI should aid development but never compromise academic integrity.

This coordinated action by major tech firms reinforces the message that AI has no place in the examination hall, despite China’s broader push to cultivate an AI-literate generation.

Growing push in Europe to regulate children’s social media use

Several European countries, led by Denmark, France, and Greece, are intensifying efforts to shield children from the potentially harmful effects of social media. With Denmark taking over the EU Council presidency from July, its Digital Minister, Caroline Stage Olsen, has made clear that her country will push for a ban on social media for children under 15.

Olsen criticised current platforms for failing to remove illegal content and for relying on addictive features that encourage prolonged use. She also warned that platforms prioritise profit and data harvesting over the well-being of young users.

That initiative builds on growing concern across the EU about the mental and physical toll social media may take on children, including the spread of dangerous content, disinformation, cyberbullying, and unrealistic body image standards. France, for instance, has already passed legislation requiring parental consent for users under 15 and is pressing platforms to verify users’ ages more rigorously.

While the European Commission has issued draft guidelines to improve online safety for minors, such as making children’s accounts private by default, some countries are calling for tougher enforcement under the EU’s Digital Services Act. Despite these moves, there is currently no consensus across the EU for an outright ban.

Cultural differences and practical hurdles, like implementing consistent age verification, remain significant challenges. Still, proposals are underway to introduce a unified age of digital adulthood and a continent-wide age verification application, possibly even embedded into devices, to limit access by minors.

Olsen and her allies remain adamant, planning to dedicate the October summit of EU digital ministers entirely to the issue of child online safety. They are also looking to future legislation, like the Digital Fairness Act, to enforce stricter consumer protection standards that explicitly account for minors. Meanwhile, age verification and parental controls are seen as crucial first steps toward limiting children’s exposure to addictive and damaging online environments.

Apple reveals new AI features at WWDC

Apple has unveiled a range of AI features at its annual Worldwide Developers Conference, focusing on tighter privacy, enhanced user tools and broader integration with OpenAI’s ChatGPT. These updates will appear across iOS 26, iPadOS 26, macOS 26 and visionOS 26, set to launch in autumn.

While Apple Intelligence was first teased last year, the company now allows third-party developers to access its on-device AI models for the first time.

CEO Tim Cook and software chief Craig Federighi outlined how these features are intended to offer more personalised, efficient apps. Users of newer iPhones will benefit from tools such as live translation in Messages and FaceTime, and AI-powered image analysis via Visual Intelligence.

Apple also enables users to blend emojis creatively and use ChatGPT through its Image Playground to stylise photos. Enhancements to the Wallet app will help summarise order tracking from emails, and AI-generated voices will offer fitness updates.

Despite these innovations, Apple’s redesign of Siri remains incomplete and is not expected to launch soon.

The event failed to deliver major surprises, as many details had already been leaked. Investors responded cautiously, sending Apple shares down by 1.2%. The firm has lost 20% of its value this year and no longer holds the top spot as the world’s most valuable company.

Nonetheless, Apple is expected to reveal more AI advancements in 2026.
