UK universities urged to act fast on AI teaching

UK universities risk losing their competitive edge unless they adopt a clear, forward-looking approach to AI in teaching. Falling enrolments, limited funding, and outdated digital systems have exposed a lack of AI literacy across many institutions.

As AI skills become essential for today’s workforce, employers increasingly expect graduates to be confident users rather than passive observers.

Many universities continue relying on legacy technology rather than exploring the full potential of modern learning platforms. AI tools can enhance teaching by adapting to individual student needs and helping educators identify learning gaps.

However, few staff have received adequate training, and many universities lack the resources or structure to embed AI into day-to-day teaching effectively.

To close the growing gap between education and the workplace, universities must explore flexible short courses and microcredentials that develop workplace-ready skills.

Introducing ethical standards and data transparency from the start will ensure AI is used responsibly without weakening academic integrity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Delta’s personalised flight costs under scrutiny

Delta Air Lines’ recent revelation about using AI to price some airfares is drawing significant criticism. The airline aims to increase AI-influenced pricing to 20 per cent of its domestic flights by late 2025.

While Delta’s president, Glen Hauenstein, noted positive results from their Fetcherr-supplied AI tool, industry observers and senators are voicing concerns. Critics worry that AI-driven pricing, similar to rideshare surge models, could lead to increased fares for travellers and raise serious data privacy issues.

Senators including Ruben Gallego, Mark Warner, and Richard Blumenthal highlighted fears that ‘surveillance pricing’ could draw on extensive personal data to estimate a passenger’s willingness to pay.

Although a Delta spokesperson denied that fares are individualised based on personal information, AI experts suggest factors like device type and browsing behaviour are likely influencing prices, making them ‘deeply personalised’.
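
Neither Delta nor Fetcherr has disclosed how its pricing model works, so the Python sketch below is purely hypothetical: it shows how signals like these could be folded into a fare multiplier. Every signal, weight, and number is invented for illustration.

```python
# Purely hypothetical sketch of signal-based fare adjustment; Delta and
# Fetcherr have not published their models, and these signals, weights,
# and numbers are invented for illustration.
BASE_FARE = 220.0  # notional economy fare in dollars

def personalised_fare(signals: dict) -> float:
    """Scale a base fare by crude willingness-to-pay proxies."""
    multiplier = 1.0
    if signals.get("device") == "high_end_phone":
        multiplier += 0.05  # pricier device taken as a budget proxy
    if signals.get("days_to_departure", 30) < 3:
        multiplier += 0.20  # last-minute urgency
    if signals.get("repeat_searches", 0) > 5:
        multiplier += 0.10  # visible intent to buy
    return round(BASE_FARE * multiplier, 2)

print(personalised_fare({"device": "high_end_phone",
                         "days_to_departure": 2,
                         "repeat_searches": 8}))  # 297.0
```

On a toy model like this, the incognito and cookie-clearing tactics mentioned later in the piece amount to resetting the repeat-search signal to its default.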

Different travellers could be affected unevenly. Bargain hunters with flexible dates might benefit, but business travellers and last-minute bookers may face higher costs. Other airlines like Virgin Atlantic also use Fetcherr’s technology, indicating a wider industry trend.

Pricing experts like Philip Carls warn that passengers won’t know if they’re getting a fair deal, and proving discrimination, even if unintended by AI, could be almost impossible.

American Airlines’ CEO, Robert Isom, has publicly criticised Delta’s move, stating American won’t copy the practice, though past incidents show airlines can adjust fares based on booking data even without AI.

With dynamic pricing technology already permitted, experts anticipate lawmakers will soon scrutinise AI’s role more closely, potentially leading to new transparency mandates.

For now, travellers can try strategies like using incognito mode, clearing cookies, or employing a VPN to obscure their digital footprint and potentially avoid higher AI-driven fares.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China says the US used a Microsoft server vulnerability to launch cyberattacks

China has accused the US of exploiting long-known vulnerabilities in Microsoft Exchange servers to launch cyberattacks on its defence sector, escalating tensions in the ongoing digital arms race between the two superpowers.

In a statement released on Friday, the Cyber Security Association of China claimed that US hackers compromised servers belonging to a significant Chinese military contractor, allegedly maintaining access for nearly a year.

The group did not disclose the name of the affected company.

The accusation is a sharp counterpunch to long-standing US claims that Beijing has orchestrated repeated cyber intrusions using the same Microsoft software. In 2021, Microsoft attributed a wide-scale hack affecting tens of thousands of Exchange servers to Chinese threat actors.

Two years later, another incident compromised the email accounts of senior US officials, prompting a federal review that criticised Microsoft for what it called a ‘cascade of security failures’.

Microsoft, based in Redmond, Washington, has recently disclosed additional intrusions by China-backed groups, including attacks exploiting flaws in its SharePoint platform.

Jon Clay of Trend Micro commented on the tit-for-tat cyber blame game: ‘Every nation carries out offensive cybersecurity operations. Given the latest SharePoint disclosure, this may be China’s way of retaliating publicly.’

Cybersecurity researchers note that Beijing has recently increased its use of public attribution as a geopolitical tactic. Ben Read of Wiz.io pointed out that China now uses cyber accusations to pressure Taiwan and shape global narratives around cybersecurity.

In April, China accused US National Security Agency (NSA) employees of hacking into the Asian Winter Games in Harbin, targeting personal data of athletes and organisers.

While the US frequently names alleged Chinese hackers and pursues legal action against them, China has historically avoided levelling public allegations against American intelligence agencies, until now.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Concerns grow over children’s use of AI chatbots

The growing use of AI chatbots and companions among children has raised safety concerns, with experts warning of inadequate protections and potential emotional risks.

Often not designed for young users, these apps lack adequate age verification and moderation features, leaving children exposed to risk. The eSafety Commissioner noted that many children are spending hours daily with AI companions, sometimes discussing topics like mental health and sex.

Studies in Australia and the UK show high engagement, with many young users viewing the chatbots as real friends and sources of emotional advice.

Experts, including Professor Tama Leaver, warn that these systems are manipulative by design, built to keep users engaged without guaranteeing appropriate or truthful responses.

Despite the concerns, initiatives like Day of AI Australia promote digital literacy to help young people understand and navigate such technologies critically.

Organisations like UNICEF say AI could offer significant educational benefits if applied safely. However, they stress that Australia must take childhood digital safety more seriously as AI rapidly reshapes how young people interact, learn and socialise.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China’s Silk Typhoon hackers filed patents for advanced spyware tools

A Chinese state-backed hacking group known as Silk Typhoon has filed more than ten patents for intrusive cyberespionage tools, shedding light on its operations’ vast scope and sophistication.

These patents, registered by firms linked to China’s Ministry of State Security, detail covert data-collection software that goes far beyond the group’s previously known attack methods.

The revelations surfaced following a July 2025 US Department of Justice indictment against two alleged members of Silk Typhoon, Xu Zewei and Zhang Yu.

Both are associated with companies tied to the Shanghai State Security Bureau and connected to the Hafnium group, which Microsoft renamed Silk Typhoon in 2023.

Rather than covering only Windows environments, the patent filings reveal a sweeping set of surveillance tools designed for Apple devices, routers, mobile phones, and even smart home appliances.

Submissions include software for bypassing FileVault encryption, extracting remote cellphone data, decrypting hard drives, and analysing smart devices. Analysts from SentinelLabs suggest these filings offer an unprecedented glimpse into the architecture of China’s cyberwarfare ecosystem.

Silk Typhoon gained global attention in 2021 with its Microsoft Exchange ProxyLogon campaign, which prompted a rare coordinated condemnation by the US, UK, and EU. The newly revealed capabilities show the group’s operations are far more advanced and diversified than previously believed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI and Nscale to build an AI super hub in Norway

OpenAI has revealed its first European data centre project in partnership with British startup Nscale, selecting Norway as the location for what is being called ‘Stargate Norway’.

The initiative mirrors the company’s ambitious $500 billion US ‘Stargate’ infrastructure plan and reflects Europe’s growing demand for large-scale AI computing capacity.

Nscale will lead the development of a $1 billion AI gigafactory in Norway, with engineering firm Aker matching the investment. These advanced data centres are designed to meet the heavy processing requirements of cutting-edge AI models.

OpenAI expects the facility to deliver 230 MW of computing power by the end of 2026, making it a significant strategic foothold for the company on the continent.

Sam Altman, CEO of OpenAI, stated that Europe needs significantly more computing to unlock AI’s full potential for researchers, startups, and developers. He said Stargate Norway will serve as a cornerstone for driving innovation and economic growth in the region.

Nscale confirmed that Norway’s AI ecosystem will receive priority access to the facility, while remaining capacity will be offered to users across the UK, Nordics and Northern Europe.

The data centre will support 100,000 of NVIDIA’s most advanced GPUs, with long-term plans to scale as demand grows.
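
As a rough sanity check (assuming the 230 MW figure covers the whole site), 230 MW spread across 100,000 GPUs works out to about 2.3 kW per GPU, a plausible budget once cooling, networking, and host servers are added to each accelerator’s own draw of roughly 1 kW.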

The move follows broader European efforts to strengthen AI infrastructure, with the UK and France pushing for major regulatory and funding reforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Scattered Spider cyberattacks set to intensify, warn FBI and CISA

The cybercriminal group known as Scattered Spider is expected to intensify its attacks in the coming weeks, according to a joint warning issued by the FBI, CISA, and cybersecurity agencies in Canada, the UK and Australia.

These warnings highlight the group’s increasingly sophisticated methods, including impersonating employees to bypass IT support and hijack multi-factor authentication processes.

Instead of relying on old techniques, the hackers now deploy stealthy tools like RattyRAT and DragonForce ransomware, particularly targeting VMware ESXi servers.

Their attacks combine social engineering with SIM swapping and phishing, enabling them to exfiltrate sensitive data before locking systems and demanding payment — a tactic known as double extortion.

Scattered Spider, also referred to as Octo Tempest, is reportedly creating fake online identities and infiltrating internal communication channels like Slack and Microsoft Teams. In some cases, they have even joined incident response calls to gain insight into how companies are reacting.

Security agencies urge organisations to adopt phishing-resistant multi-factor authentication, audit remote access software, monitor unusual logins and behaviours, and ensure offline encrypted backups are maintained.
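
One of those recommendations, monitoring unusual logins, is easy to picture in miniature. The Python sketch below flags a first-seen IP during off-hours; the event format, fields, and thresholds are all invented for illustration, and a real deployment would correlate far richer signals in a SIEM rather than run a standalone script.

```python
# Toy detector for the "monitor unusual logins" recommendation.
# Event format and thresholds are invented for illustration only.
from collections import defaultdict

# (user, source_ip, hour_of_day) tuples, e.g. parsed from auth logs.
events = [
    ("alice", "10.0.0.5", 10),
    ("alice", "10.0.0.5", 11),
    ("alice", "203.0.113.7", 3),   # new IP at 3 a.m.: suspicious
    ("bob", "10.0.0.9", 14),
]

seen_ips = defaultdict(set)
for user, ip, hour in events:
    first_time = ip not in seen_ips[user]
    off_hours = hour < 6 or hour >= 22
    if first_time and off_hours:
        print(f"ALERT: {user} logged in from new IP {ip} at {hour:02d}:00")
    seen_ips[user].add(ip)
```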

More incidents are expected, as the group continues refining its strategies instead of slowing down.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU to launch digital age verification system by 2026

The European Union will roll out digital age verification across all member states by 2026. Under the Digital Services Act, platforms will be required to verify user age using the new EU Digital Identity Wallet (EUDIW). Non-compliance could lead to fines of up to 6% of global annual turnover.

Five countries will initially pilot the system, which is designed to protect minors and promote online safety. The EUDIW uses privacy-preserving cryptographic proofs, allowing users to prove they are over 18 without uploading personal IDs.
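
The EUDIW’s actual protocols are still being standardised, but the core idea of proving a predicate without handing over the document behind it can be sketched in a few lines of Python. The scheme below is a deliberately simplified stand-in, plain selective disclosure with an Ed25519 signature rather than a zero-knowledge proof, and the claim format and key handling are invented for illustration.

```python
# Simplified selective-disclosure sketch, not the real EUDIW protocol:
# an issuer signs a bare "over 18" claim, and a verifier checks the
# signature without ever seeing a name, birth date, or ID document.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer (e.g. a national identity authority) holds the signing key.
issuer_key = Ed25519PrivateKey.generate()
issuer_public = issuer_key.public_key()

# The wallet stores a signed claim containing only the predicate.
claim = json.dumps({"over_18": True}, sort_keys=True).encode()
signature = issuer_key.sign(claim)

def verify_age(presented: bytes, sig: bytes) -> bool:
    """Platform-side check: valid issuer signature and a true predicate."""
    try:
        issuer_public.verify(sig, presented)
    except InvalidSignature:
        return False
    return bool(json.loads(presented).get("over_18", False))

print(verify_age(claim, signature))  # True: age proven, identity undisclosed
```

A production wallet would also have to prevent replay and cross-site linkability, which is precisely what the standardised credential formats are meant to handle.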

Unlike the UK’s ID-upload approach, which triggered a rise in VPN usage, the EU model prioritises user anonymity and data minimisation. The system is being developed by Scytales and T-Systems.

Despite its benefits, privacy advocates have flagged concerns. Although the checks themselves are anonymised, telecom providers could still analyse network-level signals to infer user behaviour.

Beyond age checks, the EUDIW will store and verify other credentials, including diplomas, licences, and health records. The initiative aims to create a trusted, cross-border digital identity ecosystem across Europe.

As a result, platforms and marketers must adapt. Behavioural tracking and personalised ads may become harder to implement. Smaller businesses might struggle with technical integration and rising compliance costs.

However, centralised control also raises risks. These include potential phishing attacks, service disruptions, and increased government visibility over online activity.

If successful, the EU’s digital identity model could inspire global adoption. It offers a privacy-first alternative to commercial or surveillance-heavy systems and marks a major leap forward in digital trust and safety.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google rolls out AI age detection to protect teen users

In a move aimed at enhancing online protections for minors, Google has started rolling out a machine learning-based age estimation system for signed-in users in the United States.

The new system uses AI to identify users who are likely under the age of 18, with the goal of providing age-appropriate digital experiences and strengthening privacy safeguards.

Initially deployed to a small number of users, the system is part of Google’s broader initiative to align its platforms with the evolving needs of children and teenagers growing up in a digitally saturated world.

‘Children today are growing up with technology, not growing into it like previous generations. So we’re working directly with experts and educators to help you set boundaries and use technology in a way that’s right for your family,’ the company explained in a statement.

The system builds on changes first previewed earlier this year and reflects Google’s ongoing efforts to comply with regulatory expectations and public demand for better youth safety online.

Once a user is flagged by the AI as likely underage, Google will introduce a range of restrictions—most notably in advertising, content recommendation, and data usage.

According to the company, users identified as minors will have personalised advertising disabled and will be shielded from ad categories deemed sensitive. These protections will be enforced across Google’s entire advertising ecosystem, including AdSense, AdMob, and Ad Manager.

The company’s publishing partners were informed via email this week that no action will be required on their part, as the changes will be implemented automatically.

Google’s blog post titled ‘Ensuring a safer online experience for US kids and teens’ explains that its machine learning model estimates age based on behavioural signals, such as search history and video viewing patterns.
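
Google has not published the model, features, or thresholds, so the sketch below is only a generic illustration of the approach the post describes: train a classifier on behavioural features, then use its probability score to decide which accounts get age-restricted defaults. All features and data are invented.

```python
# Generic illustration of age estimation from behavioural signals.
# Google's actual model, features, and thresholds are not public;
# everything below is invented for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features per account: [share of gaming-video watch time,
# average session length in hours, share of school-topic searches]
X = np.array([
    [0.8, 1.5, 0.6],  # labelled minor
    [0.7, 2.0, 0.5],  # labelled minor
    [0.2, 0.5, 0.0],  # labelled adult
    [0.1, 0.8, 0.1],  # labelled adult
])
y = np.array([1, 1, 0, 0])  # 1 = under 18, 0 = adult

model = LogisticRegression().fit(X, y)

# A probability score lets borderline accounts be routed to explicit
# verification instead of being hard-blocked.
new_user = np.array([[0.6, 1.8, 0.4]])
print(model.predict_proba(new_user)[0, 1])  # estimated P(under 18)
```

The probabilistic output matters because, as described below, misclassified users are offered explicit verification rather than being blocked outright.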

If a user is mistakenly flagged or wishes to confirm their age, Google will offer verification tools, including the option to upload a government-issued ID or submit a selfie.

The company stressed that the system is designed to respect user privacy and does not involve collecting new types of data. Instead, it aims to build a privacy-preserving infrastructure that supports responsible content delivery while minimising third-party data sharing.

Beyond advertising, the new protections extend into other parts of the user experience. For those flagged as minors, Google will disable Timeline location tracking in Google Maps and also add digital well-being features on YouTube, such as break reminders and bedtime prompts.

Google will also tweak recommendation algorithms to avoid promoting repetitive content on YouTube, and restrict access to adult-rated applications in the Play Store for flagged minors.

The initiative is not Google’s first foray into child safety technology. The company already offers Family Link for parental controls and YouTube Kids as a tailored platform for younger audiences.

However, the deployment of automated age estimation reflects a more systemic approach, using AI to enforce real-time, scalable safety measures. Google maintains that these updates are part of a long-term investment in user safety, digital literacy, and curating age-appropriate content.

Similar initiatives have already been tested in international markets, and the company says it will closely monitor the US rollout before considering broader implementation.

‘This is just one part of our broader commitment to online safety for young users and families,’ the blog post reads. ‘We’ve continually invested in technology, policies, and literacy resources to better protect kids and teens across our platforms.’

Nonetheless, the programme is likely to attract scrutiny. Critics may question the accuracy of AI-powered age detection and whether the measures strike the right balance between safety, privacy, and personal autonomy — or risk overstepping.

Some parents and privacy advocates may also raise concerns about the level of visibility and control families will have over how children are identified and managed by the system.

As public pressure grows for tech firms to take greater responsibility in protecting vulnerable users, Google’s rollout may signal the beginning of a new industry standard.

The shift towards AI-based age assurance reflects a growing consensus that digital platforms must proactively mitigate risks for young users through smarter, more adaptive technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China demands Nvidia explain security flaws in H20 chips

China’s top internet regulator has summoned Nvidia to explain alleged security concerns linked to its H20 computing chips.

The Cyberspace Administration of China stated that the chips, which are sold domestically, may contain backdoor vulnerabilities that could pose risks to users and systems.

Nvidia has been asked to submit technical documentation and provide a formal response addressing the potential flaws.

The chips are part of Nvidia’s tailored product line for the Chinese market following US export restrictions on advanced AI processors.

The investigation signals tighter scrutiny from Chinese authorities on foreign technology amid ongoing geopolitical tensions and a global race for semiconductor dominance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!