OpenAI leadership battles talent exodus

OpenAI is scrambling to retain its top researchers after Meta launched a bold recruitment drive. Chief Research Officer Mark Chen likened the situation to a break-in at home and reassured staff that leadership is actively addressing the issue.

Meta has reportedly offered signing bonuses of up to $100 million to entice senior OpenAI staff. Chen and CEO Sam Altman have responded by reviewing compensation packages and exploring creative retention incentives, assuring staff that the process will be fair.

The recruitment push comes as Meta intensifies efforts in AI, investing heavily in its superintelligence lab and targeting experts from OpenAI, Google DeepMind, and Scale AI.

OpenAI has encouraged staff to resist pressure to make quick decisions, especially during its scheduled recharge week, emphasising the importance of the broader mission over short-term gains.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Lung cancer caught early thanks to AI

A 69-year-old woman from Surrey has credited AI with saving her life after it detected lung cancer that human radiologists initially missed.

The software flagged a concerning anomaly in a chest X-ray that had been given the all-clear, prompting urgent follow-up and surgery.

NHS hospitals increasingly use AI tools like Annalise.ai, which analyses scans and prioritises urgent cases for radiologists.

Dianne Covey, whose tumour was caught at stage one, needed neither chemotherapy nor radiotherapy and has since made a full recovery.

With investments exceeding £36 million, the UK government and NHS are rapidly deploying AI to improve cancer diagnosis rates and reduce waiting times. AI has now been trialled or implemented across more than 45 NHS trusts and is also used for skin and prostate cancer detection.

Doctors and technologists say AI is not replacing medical professionals but enhancing their capabilities by highlighting critical cases and improving speed.

Experts warn that outdated machines, biased training data and over-reliance on consumer AI tools remain risks to patient outcomes.

Balancing security and usability in digital authentication

A report by the FIDO Alliance revealed that 53% of consumers observed an increase in suspicious messages in 2024, with SMS, emails, and phone calls being the primary vectors.

As digital scams and AI-driven fraud rise, businesses face growing pressure to strengthen authentication methods without compromising user experience.

No clear standard has emerged despite the range of available authentication options—including passkeys, one-time passwords (OTP), multi-factor authentication (MFA), and biometric systems.

Industry experts warn that focusing solely on advanced tools can lead to overlooking basic user needs. Minor authentication hurdles such as CAPTCHA errors have led to customer drop-offs and failed transactions.

Organisations are exploring risk-based, adaptive authentication models that adjust security levels based on user behaviour and context. Such systems could eventually replace static logins with continuous, behind-the-scenes verification.
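As an illustration of the idea (a minimal sketch, not a description of any vendor's product; the signal names and weights are hypothetical), a risk-based model might combine simple context signals into a score that determines how strong an authentication challenge to issue:

```python
# Illustrative sketch of risk-based adaptive authentication.
# All signals and thresholds here are hypothetical, for demonstration only.

def risk_score(known_device: bool, usual_location: bool, unusual_hour: bool) -> float:
    """Combine simple context signals into a risk score between 0 and 1."""
    score = 0.0
    if not known_device:
        score += 0.5   # an unfamiliar device is the strongest signal here
    if not usual_location:
        score += 0.3
    if unusual_hour:
        score += 0.2
    return min(score, 1.0)

def auth_requirement(score: float) -> str:
    """Map the risk score to an authentication step-up level."""
    if score < 0.3:
        return "passive"   # continuous background verification only
    if score < 0.7:
        return "otp"       # step up to a one-time password
    return "mfa"           # full multi-factor challenge

# A login from a known device, usual location and normal hours passes
# silently; an unfamiliar device triggers a stronger challenge.
```

The point of the pattern is that low-risk logins stay frictionless while only anomalous ones face extra hurdles, which is how such models aim to balance security against usability.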

AI complicates the landscape further. As autonomous assistants handle tasks like booking tickets or making purchases, distinguishing legitimate user activity from malicious bots becomes increasingly tricky.

With no universal solution, experts say businesses must offer a flexible range of secure options tailored to user preferences. The challenge remains to find the right balance between security and usability in an evolving threat environment.

Taiwan leads in AI defence of democracy

Taiwan has emerged as a global model for using AI to defend democracy, earning recognition for its success in combating digital disinformation.

The island joined a new international coalition led by the International Foundation for Electoral Systems to strengthen election integrity through AI collaboration.

Constantly targeted by foreign actors, Taiwan has developed proactive digital defence systems that serve as blueprints for other democracies.

Its rapid response strategies and tech-forward approach have made it a leader in countering AI-powered propaganda.

While many nations are only beginning to grasp the risks posed by AI to democratic systems, Taiwan has already faced these threats and adapted.

Its approach now shapes global policy discussions around safeguarding elections in the digital era.

Meta expands AI ambitions with more OpenAI hires

According to a report published by The Information on Sunday, Meta Platforms has hired four additional researchers from OpenAI.

The researchers—Shengjia Zhao, Jiahui Yu, Shuchao Bi, and Hongyu Ren—are set to join Meta’s AI team as part of a broader recruitment drive. All four were previously involved in AI development at OpenAI, the Microsoft-backed company behind ChatGPT and other generative models.

Earlier in the week, The Wall Street Journal reported that Meta had hired three more OpenAI researchers—Lucas Beyer, Alexander Kolesnikov and Xiaohua Zhai—based in the firm’s Zurich office.

The hires reflect Meta’s increased investment in advanced AI research, particularly in ‘superintelligence’, a term CEO Mark Zuckerberg has used to describe future AI capabilities.

Meta and OpenAI have not yet responded to requests for comment. Reuters noted that it could not independently verify the hiring details at the time of reporting.

With growing competition among tech giants in AI innovation, Meta’s continued talent acquisition suggests a clear intention to strengthen its internal capabilities through strategic hiring.

ChatGPT emerges as a search alternative, but Google holds ground

ChatGPT is now used by over 400 million people weekly and ranks as the eighth most-visited website globally. While many users rely on it for tasks like writing, productivity, and planning, a growing number are also turning to it for search — a space long dominated by Google.

Despite its popularity, experts say ChatGPT won’t fully replace Google. Rohan Sarin, a former product lead at Google and Microsoft, argues that the two serve different purposes. Google excels at direct, fact-based queries, while ChatGPT is better suited for exploration and synthesis.

‘Google connects users to the raw internet,’ Sarin notes, ‘whereas ChatGPT acts as an interpreter, helping users frame ideas and questions.’

The comparison also highlights user behaviour. While Google remains the tool of choice for verifying information, Sarin points out that many users want ‘something that works’, not necessarily precision — a strength of ChatGPT’s fast, ad-free responses.

However, industry experts don’t expect Google’s dominance to end soon. Eric M. Hoover, SEO director at Jellyfish, says Google’s integration of AI tools like Gemini and AI Overviews will help it stay competitive. ‘Search is still built into browsers, apps, and digital ecosystems,’ he adds.

Rather than one replacing the other, experts believe both platforms will coexist. ChatGPT is changing how we explore information, but Google’s role in search remains vital — especially for accuracy and source verification. For now, the best approach may not be choosing one tool over the other but knowing when to use each.

Ransomware victims still paying, Sophos finds

Nearly half of ransomware victims paid the attackers last year, according to Sophos. In its 2025 survey of 3,400 IT pros, 49% admitted to making payments—just below last year’s record.

Ransom amounts dropped significantly, with median payments falling 50% and median demands down by a third. Yet backup usage also hit a six-year low, with just 54% of firms relying on backups for recovery.

Attackers often exploited known vulnerabilities (32%) or unknown security gaps (40%), highlighting persistent weaknesses. Sophos noted many companies now accept ransomware as a business risk.

Elsewhere, CISA warned that CVE-2024-54085 in AMI MegaRAC firmware is under active exploitation. The bug allows attackers to bypass authentication remotely.

Varonis flagged abuse of Microsoft’s Direct Send email feature in a phishing campaign affecting over 70 organisations. Disabling it is advised if not essential.

Rapid7 also found critical vulnerabilities in Brother printers. One flaw, rated CVSS 9.8, allows password theft and cannot be patched; users must change default passwords instead.

Finally, Google will roll out new Gemini AI features to Android users starting on July 7, even for those with app activity disabled.

Denmark proposes landmark law to protect citizens from deepfake misuse

Denmark’s Ministry of Culture has introduced a draft law aimed at safeguarding citizens’ images and voices under national copyright legislation, Azernews reports. The move marks a significant step in addressing the misuse of deepfake technologies.

The proposed bill prohibits using an individual’s likeness or voice without prior consent, enabling affected individuals to claim compensation. While satire and parody remain exempt, the legislation explicitly bans the unauthorised use of deepfakes in artistic performances.

Under the proposed framework, online platforms that fail to remove deepfake content upon request could be subject to fines. The legislation will apply only within Denmark and is expected to pass with up to 90% parliamentary support.

The bill follows recent incidents involving manipulated videos of Denmark’s Prime Minister and legal challenges against the creators of pornographic deepfakes.

If adopted, Denmark would become the first country in the region to implement such legal measures. The proposal is expected to spark broader discussions across Europe on the ethical boundaries of AI-generated content.

AI training with pirated books triggers massive legal risk

A US court has ruled that AI company Anthropic engaged in copyright infringement by downloading millions of pirated books to train its language model, Claude.

Although the court found that using copyrighted material for AI training could qualify as ‘fair use’ under US law when the content is transformed, it also held that acquiring the content illegally instead of licensing it lawfully constituted theft.

Judge William Alsup described AI as one of the most transformative technologies of our time, but stated that Anthropic had obtained millions of digital books from pirate sites such as LibGen and Pirate Library Mirror.

He noted that buying the same books later in print form does not erase the initial violation, though it may reduce potential damages.

Statutory damages for wilful copyright infringement in the US can reach up to $150,000 per work, meaning total liability could run into the billions.

The case highlights the fine line between transformation and theft and signals growing legal pressure on AI firms to respect intellectual property instead of bypassing established licensing frameworks.

Australia, which uses a ‘fair dealing’ system rather than ‘fair use’, already offers flexible licensing schemes through organisations like the Copyright Agency.

CEO Josephine Johnston urged policymakers not to weaken Australia’s legal framework in favour of global tech companies, arguing that licensing provides certainty for developers and fair payment to content creators.

New NHS plan adds AI to protect patient safety

The NHS is set to introduce a world-first AI system to detect patient safety risks early by analysing hospital data for warning signs of deaths, injuries, or abuse.

Instead of waiting for patterns to emerge through traditional oversight, the AI will use near real-time data to trigger alerts and launch rapid inspections.

Health Secretary Wes Streeting announced that a new maternity-focused AI tool will roll out across NHS trusts in November. It will monitor stillbirths, brain injuries and death rates, helping identify issues before they become scandals.

The initiative forms part of a new 10-year plan to modernise the health service and move it from analogue to digital care.

The technology will send alerts to the Care Quality Commission, whose teams will investigate flagged cases. Professor Meghana Pandit, NHS England’s medical director, said the UK would become the first country to trial this AI-enabled early warning system to improve patient care.

CQC chief Sir Julian Hartley added it would strengthen quality monitoring across services.

However, nursing leaders voiced concerns that AI could distract from more urgent needs. Professor Nicola Ranger of the Royal College of Nursing warned that low staffing levels remain a critical issue.

She stressed that one nurse often handles too many patients, and technology should not replace the essential investment in frontline staff.
