AI in higher education: A mixed blessing for students and institutions

AI is rapidly reshaping university life, offering students new tools to boost creativity, structure assignments, and develop ideas more efficiently. At institutions like Oxford Brookes University, students like 22-year-old Sunjaya Phillips have found that AI enhances confidence and productivity when used responsibly, with faculty guidance.

She describes AI as a ‘study buddy’ that transformed her academic experience, especially during creative blocks, where AI-generated prompts saved valuable time. However, the rise of AI in academia also raises important concerns.

A global student survey revealed that while many embrace AI in their studies, a majority fear its long-term implications for employment. Some admit to misusing the technology for dishonest purposes, highlighting the ethical challenges it presents.

Experts like Dr Charlie Simpson from Oxford Brookes caution that relying too heavily on AI to ‘do the thinking’ undermines educational goals and may devalue the learning process.

Despite these concerns, many educators and institutions remain optimistic about AI’s potential—if used wisely. Professor Keiichi Nakata from Henley Business School stresses that AI is not a replacement but a powerful aid, likening its expected workplace relevance to today’s basic IT skills.

He and others argue that responsible AI use could elevate the capabilities of future graduates and reshape degree expectations accordingly. While some students worry about job displacement, others, like Phillips, view AI as a support system rather than a threat.

The consensus among academics is clear: success in the age of AI will depend not on avoiding the technology, but on mastering it with discernment, ethics, and adaptability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT adds meeting recording and cloud access

OpenAI has launched new features for ChatGPT that allow it to record meetings, transcribe conversations, and pull information directly from cloud platforms like Google Drive and SharePoint.

Instead of relying on typed input alone, users can now speak to ChatGPT, which records audio, creates editable summaries, and helps generate follow-up content such as emails or project outlines.

‘Record’ is currently available to Team users via the macOS app and will soon expand to Enterprise and Edu accounts.

The recording tool automatically deletes the audio after transcription and applies existing workspace data rules, ensuring recordings are not used for training.

Instead of leaving notes scattered across different platforms, users gain a structured and searchable history of conversations, voice notes, or brainstorming sessions, which ChatGPT can recall and apply during future interactions.

At the same time, OpenAI has introduced new connectors for business users that let ChatGPT access files from cloud services like Dropbox, OneDrive, Box, and others.

These connectors allow ChatGPT to search and summarise information from internal documents, rather than depending only on web search or user uploads. The update also includes beta support for Deep Research agents that can work with tools like GitHub and HubSpot.

OpenAI has embraced the Model Context Protocol, an open standard allowing organisations to build their own custom connectors for proprietary tools.
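To make the idea of a custom connector concrete, here is a highly simplified sketch of the pattern MCP standardises: JSON-RPC-style messages passed between a model host and a tool server. The method names, tool names, and message fields below are illustrative assumptions, not MCP's actual schema.

```python
import json

# Hypothetical internal tool exposed to the model via a connector.
TOOLS = {
    "search_wiki": lambda query: f"3 internal pages match '{query}'",
}

def handle_request(raw: str) -> str:
    """Dispatch one JSON-RPC-style request to the named tool and wrap the result."""
    req = json.loads(raw)
    tool = TOOLS[req["params"]["tool"]]
    result = tool(req["params"]["query"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

request = json.dumps({
    "jsonrpc": "2.0", "id": 1,
    "params": {"tool": "search_wiki", "query": "Q3 roadmap"},
})
print(handle_request(request))
```

In a real deployment the transport, capability negotiation, and schemas come from the MCP specification and its SDKs; the point here is only that an organisation supplies the tool side of the exchange.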

Rather than serving purely as a general-purpose chatbot, ChatGPT is evolving into a workplace assistant capable of tapping into and understanding a company’s complete knowledge base.

M&S CEO targeted by hackers in abusive ransom email

Marks & Spencer has been directly targeted by a ransomware group calling itself DragonForce, which sent a vulgar and abusive ransom email to CEO Stuart Machin using a compromised employee email address.

The message, laced with offensive language and racist terms, demanded that Machin engage via a darknet portal to negotiate payment. It also claimed that the hackers had encrypted the company’s servers and stolen customer data, a claim M&S eventually acknowledged weeks later.

The email, dated 23 April, appears to have been sent from the account of an Indian IT worker employed by Tata Consultancy Services (TCS), a long-standing M&S tech partner.

TCS has denied involvement and stated that its systems were not the source of the breach. M&S has remained silent publicly, neither confirming the full scope of the attack nor disclosing whether a ransom was paid.

The cyber attack has caused major disruption, costing M&S an estimated £300 million and halting online orders for over six weeks.

DragonForce has also claimed responsibility for a simultaneous attack on the Co-op, which left some shelves empty for days. While nothing has yet appeared on DragonForce’s leak site, the group claims it will publish stolen information soon.

Investigators believe DragonForce operates as a ransomware-as-a-service collective, offering tools and platforms to cybercriminals in exchange for a 20% share of any ransom.

Some experts suspect the real perpetrators may be young hackers from the West, linked to a loosely organised online community called Scattered Spider. The UK’s National Crime Agency has confirmed it is focusing on the group as part of its inquiry into the recent retail hacks.

Epic adds AI NPC tools to Fortnite as Vader voice sparks union clash

Epic Games is launching new tools for Fortnite creators that enable them to build AI-powered non-player characters (NPCs), following the debut of an AI-generated Darth Vader that players can talk to in-game.

The feature, which reproduces the iconic voice of James Earl Jones using AI, marks a significant step in interactive gaming—but also comes with its share of challenges and controversy.

According to The Verge, Epic encountered several difficulties in fine-tuning Vader’s voice and responses to feel authentic and fit smoothly into gameplay. ‘The culmination of a very intense effort for a character everybody understands,’ said Saxs Persson, executive vice president of the Fortnite ecosystem.

Persson noted that the team worked carefully to ensure that when Vader joins a player’s team, he behaves as a fearsome and aggressive ally—true to his cinematic persona.

However, the rollout wasn’t entirely smooth. In a live-streamed session, popular Fortnite creator Loserfruit prompted Vader to swear, exposing the system’s content filtering flaws. Epic responded quickly with patches and has since implemented multiple layers of safety checks.

‘We do our best job on day one,’ said Persson, ‘but more importantly, we’re ready to surround the problem and have fixes in place as fast as possible.’

Now, Fortnite creators will have access to the same suite of AI tools and safety systems used to develop Vader. They can control voice tone, dialogue, and NPC behaviour while relying on Epic’s safeguards to avoid inappropriate interactions.

The feature launch comes at a sensitive moment, as actor union SAG-AFTRA has filed a complaint against Epic Games over using AI to recreate Vader’s voice.

The union claims that Llama Productions, an Epic subsidiary, employed the technology without consulting or bargaining with the union, replacing the work of human voice actors.

‘We must protect our right to bargain terms and conditions around uses of voice that replace the work of our members,’ SAG-AFTRA said, emphasising its support for actors and estates in managing the use of digital replicas.

As Epic expands its AI capabilities in gaming, it faces both the technical challenges of responsible deployment and the growing debate around AI’s impact on creative professions.

Reddit accuses Anthropic of misusing user content

Reddit has taken legal action against AI startup Anthropic, alleging that the company scraped its platform without permission and used the data to train and commercialise its Claude AI models.

The lawsuit, filed in San Francisco’s Superior Court, accuses Anthropic of breaching contract terms, unjust enrichment, and interfering with Reddit’s operations.

According to Reddit, Anthropic accessed the platform more than 100,000 times despite publicly claiming to have stopped doing so.

The complaint claims Anthropic ignored Reddit’s technical safeguards, such as robots.txt files, and bypassed the platform’s user agreement to extract large volumes of user-generated content.
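The robots.txt mechanism Reddit cites is advisory, which is why bypassing it is framed as a contract issue rather than a technical breach. A well-behaved crawler consults the file before fetching anything; the sketch below uses Python's standard library, with a blanket-disallow rule standing in for the kind of directives Reddit serves to automated clients.

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt: a blanket disallow for all user agents.
rules = """User-agent: *
Disallow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

def may_fetch(agent: str, url: str) -> bool:
    """Return True only if the parsed robots.txt permits this agent to fetch the URL."""
    return parser.can_fetch(agent, url)

print(may_fetch("ExampleBot", "https://www.reddit.com/r/python/"))  # False: blanket disallow
```

Nothing stops a scraper from skipping this check entirely, which is precisely the conduct the complaint describes.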

Reddit argues that Anthropic’s actions undermine its licensing deals with companies like OpenAI and Google, who have agreed to strict content usage and deletion protocols.

The filing asserts that Anthropic intentionally used personal data from Reddit without ever seeking user consent, calling the company’s conduct deceptive. Despite public statements suggesting respect for privacy and web-scraping limitations, Anthropic is portrayed as having disregarded both.

The lawsuit even cites Anthropic’s own 2021 research that acknowledged Reddit content as useful in training AI models.

Reddit is now seeking damages, repayment of profits, and a court order to stop Anthropic from using its data further. The market responded positively, with Reddit’s shares closing 6.7% higher at $118.21, indicating investor support for the company’s aggressive stance on data protection.

WhatsApp to add usernames for better privacy

WhatsApp is preparing to introduce usernames, allowing users to hide their phone numbers and opt for a unique ID instead. Meta’s push reflects growing demand for more secure and anonymous communication online.

Currently in development and not yet available for testing, the new feature will let users create usernames with letters, numbers, periods, and underscores, while blocking misleading formats like web addresses.
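WhatsApp has not published its exact validation rules, but the reported constraints (letters, digits, periods, and underscores, with URL-lookalike names rejected) can be approximated with a pattern like the one below. The length limits and blocked suffixes are assumptions for illustration only.

```python
import re

# Assumed rules: 3-30 chars of letters/digits/periods/underscores,
# rejecting names that end in a common web-address suffix.
USERNAME_RE = re.compile(
    r"^(?!.*\.(?:com|net|org|io)$)[A-Za-z0-9._]{3,30}$",
    re.IGNORECASE,
)

def is_valid_username(name: str) -> bool:
    """Return True if the name fits the assumed character and format rules."""
    return bool(USERNAME_RE.match(name))

print(is_valid_username("sun_jaya.99"))  # True
print(is_valid_username("example.com"))  # False: looks like a web address
```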

The move aims to improve privacy by letting users connect without revealing personal contact details. A system message will alert contacts whenever a username is updated, adding transparency to the process.

Although not yet in public testing, the feature is expected to roll out soon, bringing WhatsApp in line with other major messaging platforms that already support username-based identities.

Google email will reply in your own voice

Google is building a next-generation email system that uses generative AI to reply to mundane messages in your own tone, according to DeepMind CEO Demis Hassabis.

Speaking at SXSW London, Hassabis said the system would handle everyday emails instead of requiring users to write repetitive responses themselves.

Hassabis called email ‘the thing I really want to get rid of,’ and joked he’d pay thousands each month for that luxury. He emphasised that while AI could help cure diseases or combat climate change, it should also solve smaller daily annoyances first—like managing inbox overload.

The upcoming feature aims to identify routine emails and draft replies that reflect the user’s writing style, potentially making decisions on simpler matters.

While details are still limited, the project remains under development and could debut as part of Google’s premium AI subscription model before reaching free-tier users.

Gmail already includes generative tools that adjust message tone, but the new system goes further—automating replies instead of just suggesting edits.

Hassabis also envisioned a universal AI assistant that protects users’ attention and supports digital well-being, offering personalised recommendations and taking care of routine digital tasks.

Salt Typhoon and Silk Typhoon reveal weaknesses

Recent revelations about Salt Typhoon and Silk Typhoon have exposed severe weaknesses in how organisations secure their networks.

These state-affiliated hacking groups have demonstrated that modern cyber threats come from well-resourced and coordinated actors instead of isolated individuals.

Salt Typhoon, responsible for one of the largest cyber intrusions into US infrastructure, exploited cloud network vulnerabilities targeting telecom giants like AT&T and Verizon, forcing companies to reassess their reliance on traditional private circuits.

Many firms continue to believe private circuits offer better protection simply because they are off the public internet. Some even add MACsec encryption for extra defence. However, MACsec’s ‘hop-by-hop’ design introduces new risks—data is repeatedly decrypted and re-encrypted at each routing point.

Every one of these hops becomes a possible target for attackers, who can intercept, manipulate, or exfiltrate data without detection, especially when third-party infrastructure is involved.

Beyond its security limitations, MACsec presents high operational complexity and cost, making it unsuitable for today’s cloud-first environments. In contrast, solutions like Internet Protocol Security (IPSec) offer simpler, end-to-end encryption.

Although not perfect in cloud settings, IPSec can be enhanced through parallel connections or expert guidance. The Cybersecurity and Infrastructure Security Agency (CISA) urges organisations to prioritise complete encryption of all data in transit, regardless of the underlying network.
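The hop-by-hop versus end-to-end distinction can be made concrete with a toy model. This is not real MACsec or IPSec; XOR stands in for a proper cipher, and the point is only where plaintext becomes visible along the path.

```python
def xor_crypt(data: bytes, key: int) -> bytes:
    """Toy symmetric 'cipher' standing in for real link or session encryption."""
    return bytes(b ^ key for b in data)

def hop_by_hop(message: bytes, hop_keys: list) -> list:
    """MACsec-style model: each hop decrypts, could read the data, then re-encrypts."""
    seen_plaintext = []
    ciphertext = xor_crypt(message, hop_keys[0])
    for prev_key, next_key in zip(hop_keys, hop_keys[1:]):
        plaintext = xor_crypt(ciphertext, prev_key)   # hop decrypts the frame...
        seen_plaintext.append(plaintext)              # ...and could exfiltrate it
        ciphertext = xor_crypt(plaintext, next_key)   # ...before re-encrypting
    return seen_plaintext

def end_to_end(message: bytes, session_key: int, hops: int) -> list:
    """IPSec-style model: intermediate hops only ever forward opaque ciphertext."""
    ciphertext = xor_crypt(message, session_key)
    return [ciphertext] * hops

secret = b"customer call records"
print(hop_by_hop(secret, [1, 2, 3]))       # every intermediate hop recovers the plaintext
print(secret in end_to_end(secret, 7, 3))  # False: no hop sees plaintext
```

Each entry in the hop-by-hop list is an interception opportunity, which is the structural weakness the article describes.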

Silk Typhoon has further amplified concerns by exploiting privileged credentials and cloud APIs to infiltrate both on-premise and cloud systems. These actors use covert networks to maintain long-term access while remaining hidden.

As threats evolve, companies must adopt Zero Trust principles, strengthen identity controls, and closely monitor their cloud environments instead of relying on outdated security models.

Collaborating with cloud security experts can help shut down exposure risks and protect sensitive data from sophisticated and persistent threats.

HMRC targeted in £47 million UK fraud

A phishing scheme run by organised crime groups cost the UK government £47 million, according to officials from His Majesty’s Revenue and Customs.

Criminals posed as taxpayers to claim payments using fake or hijacked credentials. Rather than a cyberattack, the operation relied on impersonation and did not involve the theft of taxpayer data.

Angela MacDonald, HMRC’s deputy chief executive, confirmed to Parliament’s Treasury Committee that the fraud took place in 2024. The stolen funds were taken through three separate payments, though HMRC managed to block an additional £1.9 million attempt.

Officials began a cross-border criminal investigation soon after discovering the scam, which has led to arrests.

Around 100,000 PAYE accounts — typically used by employers for employee tax and national insurance payments — were either created fraudulently or accessed illegally.

Banks were also targeted through the use of HMRC-linked identity information. Customers first flagged the issue when they noticed unusual activity.

HMRC has shut down the fake accounts and removed false data as part of its response. John-Paul Marks, HMRC’s chief executive, assured the committee that the incident is now under control and contained. ‘That is a lot of money and unacceptable,’ MacDonald told MPs.

Cyber attack hits Lee Enterprises staff data

Thousands of current and former employees at Lee Enterprises have had their data exposed following a cyberattack earlier this year.

Hackers accessed the company’s systems in early February, compromising sensitive information such as names and Social Security numbers before the breach was contained the same day.

Although the media firm, which operates over 70 newspapers across 26 US states, swiftly secured its networks, a three-month investigation involving external cybersecurity experts revealed that attackers accessed databases containing employee details.

The breach potentially affects around 40,000 individuals — far more than the company’s 4,500 current staff — indicating that past employees were also impacted.

The stolen data could be used for identity theft, fraud or phishing attempts. Criminals may even impersonate affected employees to infiltrate deeper into company systems and extract more valuable information.

Lee Enterprises has notified those impacted and filed relevant disclosures with authorities, including the Maine Attorney General’s Office.

Headquartered in Iowa, Lee Enterprises draws over 200 million monthly online page views and generated over $611 million in revenue in 2024. The incident underscores the ongoing vulnerability of media organisations to cyber threats, especially when personal employee data is involved.
