Quantum computing partnership launches in Doha

Quantinuum and Al Rabban Capital have announced a new venture aimed at advancing quantum computing in Qatar and the region.

The partnership seeks to provide access to Quantinuum’s technologies, co-develop relevant quantum applications and train a new generation of developers.

This move aligns with Qatar’s ambition to become a hub for advanced technologies. Applications will focus on energy, medicine, genomics, and finance, with additional potential in emerging fields like Generative Quantum AI.

The venture builds on existing collaborations with Hamad Bin Khalifa University and the Qatar Center for Quantum Computing. Quantinuum’s expansion into Qatar follows growth across the US, UK, Europe, and Indo-Pacific.

Leaders from both organisations see this as a strategic milestone, strengthening technological ties between Qatar and the West. The joint venture not only supports national goals but also reflects rising global demand for quantum technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

West Lothian schools hit by ransomware attack

West Lothian Council has confirmed that personal and sensitive information was stolen following a ransomware cyberattack which struck the region’s education system on Tuesday, 6 May. Police Scotland has launched an investigation, and the matter remains an active criminal case.

Only a small fraction of the data held on the education network was accessed by the attackers. However, some of it included sensitive personal information. Parents and carers across West Lothian’s schools have been notified, and staff have also been advised to take extra precautions.

The cyberattack disrupted IT systems serving 13 secondary schools, 69 primary schools and 61 nurseries. Although the education network remains isolated from the rest of the council’s systems, contingency plans have been effective in minimising disruption, including during the ongoing SQA exams.

West Lothian Council has apologised to anyone potentially affected. It is continuing to work closely with Police Scotland and the Scottish Government. Officials have promised further updates as more information becomes available.

Tesla robot learns to cook and clean

Tesla has released a new video showing its Optimus robot performing a variety of domestic tasks, from vacuuming floors to stirring food. Instructed through natural language prompts, the robot handled chores such as cleaning a table, tearing paper towels, and taking out the bin with notable precision.

The development marks another step forward in Tesla’s goal of making humanoid robots useful in everyday settings. The Optimus team claims a breakthrough now allows the robot to learn directly from first-person human videos, accelerating task training compared to traditional methods.

Reinforcement learning is also being used to help Optimus refine its skills through trial and error in simulations or the real world. Tesla hopes to eventually deploy thousands of these robots in its factories to perform repetitive or hazardous jobs.
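Tesla has not published details of its training pipeline, but the trial-and-error loop at the heart of reinforcement learning can be sketched in a few lines: an agent repeatedly tries actions, receives a reward, and updates its value estimates until a useful policy emerges. The toy one-dimensional environment below is purely illustrative and reflects nothing about Tesla's actual setup.

```python
import random

random.seed(0)

# Toy reinforcement learning by trial and error: an agent on a
# 1-D track of 5 states learns to walk right towards a goal.
N_STATES, GOAL = 5, 4                       # states 0..4, reward only at state 4
q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q-values per state: action 0=left, 1=right
alpha, gamma, epsilon = 0.5, 0.9, 0.1       # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != GOAL:
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < epsilon:
            action = random.randint(0, 1)
        else:
            action = 0 if q[state][0] > q[state][1] else 1
        next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update from the observed transition
        q[state][action] += alpha * (
            reward + gamma * max(q[next_state]) - q[state][action]
        )
        state = next_state

# After training, the greedy policy should prefer moving right in every state.
policy = ["right" if q[s][1] > q[s][0] else "left" for s in range(GOAL)]
print(policy)
```

The same loop scales up, in principle, from this toy track to simulated robot joints, which is why simulation-based trial and error is attractive for hardware like Optimus.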

While still far from superhuman, Optimus’s progress highlights how Tesla is positioning itself in the race to commercialise humanoid robots. Competitors around the world are also developing robots for work and home environments, aiming to reshape how humans interact with machines.

Epic Games wins long battle with Apple

Fortnite has returned to the Apple app store in the US, nearly five years after it was removed in 2020. The ban followed Epic Games’ attempt to bypass Apple’s 30% commission by introducing its own payment system, sparking a major legal fight.

The game is now also available on the Epic Games Store and AltStore in the EU. This development is being widely viewed as a win for Epic Games in its lengthy dispute over app store practices.

Analysts say it may shift power dynamics in distribution, giving creators more influence against platform holders.

The US return comes just days after Fortnite was briefly unavailable globally due to a blocked update. It had already reappeared in the EU earlier this year due to new competition laws. With over 400 million players worldwide, Fortnite remains one of the most popular games in the world.

Ransomware threat evolves with deceptive PDFs

Ransomware attacks fell by 31% in April 2025 compared to the previous month. Despite the overall decline, the retail sector remained a top target, with incidents at Marks & Spencer, Co-op, Harrods and Peter Green Chilled drawing national attention.

Retail remains vulnerable due to its public profile and potential for large-scale disruption. Experts warn the drop in figures does not reflect a weaker threat, as many attacks go unreported or are deliberately concealed.

Tactics are shifting, with some groups, like Babuk 2.0, faking claims to gain notoriety or extort victims. A rising threat in the ransomware landscape is the use of malicious PDF files, which now make up over a fifth of email-based malware.

These files, increasingly crafted using generative AI, are trusted more by users and harder to detect. Cybersecurity experts are urging firms to update defences and strengthen organisational security cultures to remain resilient.

Google brings sign language translation to AI

Google has introduced Gemma 3n, an advanced AI model that can operate directly on mobile devices, laptops, and tablets without relying on the cloud. The company also revealed MedGemma, its most powerful open AI model for analysing medical images and text.

Gemma 3n supports processing audio, text, images, and video, and is built to perform well even on devices with less than 2GB of RAM. It shares its architecture with Gemini Nano and is now available in preview.

MedGemma is part of Google’s Health AI Developer Foundations programme and is designed to help developers create custom health-focused applications. It promises wide-ranging usability in multimodal healthcare tasks.

Another model, SignGemma, was announced to aid in translating sign language into spoken text. Despite concerns over Gemma’s licensing, the models continue to see widespread adoption.

Experts urge stronger safeguards as jailbroken chatbots leak illegal data

Hacked AI-powered chatbots pose serious security risks by revealing illicit knowledge the models absorbed during training, according to researchers at Ben Gurion University.

Their study highlights how ‘jailbroken’ large language models (LLMs) can be manipulated to produce dangerous instructions, such as how to hack networks, manufacture drugs, or carry out other illegal activities.

The chatbots, including those powered by models from companies like OpenAI, Google, and Anthropic, are trained on vast internet data sets. While attempts are made to exclude harmful material, AI systems may still internalize sensitive information.

Safety controls are meant to block the release of this knowledge, but researchers demonstrated how it could be bypassed using specially crafted prompts.

The researchers developed a ‘universal jailbreak’ capable of compromising multiple leading LLMs. Once the safeguards were bypassed, the chatbots consistently responded to queries that should have been blocked.

They found some AI models openly advertised online as ‘dark LLMs,’ designed without ethical constraints and willing to generate responses that support fraud or cybercrime.

Professor Lior Rokach and Dr Michael Fire, who led the research, said the growing accessibility of this technology lowers the barrier for malicious use. They warned that dangerous knowledge could soon be accessed by anyone with a laptop or phone.

Despite notifying AI providers about the jailbreak method, the researchers say the response was underwhelming. Some companies dismissed the concerns as outside the scope of bug bounty programs, while others did not respond.

The report calls on tech companies to improve their models’ security by screening training data, using advanced firewalls, and developing methods for machine ‘unlearning’ to help remove illicit content. Experts also called for clearer safety standards and independent oversight.

OpenAI said its latest models have improved resilience to jailbreaks, while Microsoft pointed to its recent safety initiatives. Other companies have not yet commented.

Microsoft and GitHub back Anthropic’s MCP

Microsoft and GitHub are officially joining the steering committee for the Model Context Protocol (MCP), a growing standard developed by Anthropic that connects AI models with data systems.

The announcement came during Microsoft’s Build 2025 event, highlighting a new phase of industry-wide backing for the protocol, which already has support from OpenAI and Google.

MCP allows developers to link AI systems with apps, business tools, and software environments using MCP servers and clients. Instead of AI models working in isolation, they can interact directly with sources like content repositories or app features to complete tasks and power tools like chatbots.
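The protocol is built on JSON-RPC 2.0, so a client invoking a capability on an MCP server sends a structured request rather than free-form text. The sketch below shows what such a message might look like; the tool name and arguments are hypothetical, not taken from any real server.

```python
import json

# Hypothetical MCP-style exchange: a client asks a server to run a tool.
# MCP frames its messages as JSON-RPC 2.0; the tool "search_documents"
# and its arguments below are invented for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_documents",                 # hypothetical tool on the server
        "arguments": {"query": "Q2 sales report"},  # illustrative payload
    },
}

# Serialise for transport to the MCP server
wire_message = json.dumps(request)
print(wire_message)
```

Because every tool call is an explicit, inspectable message like this, hosts can log, filter, or deny individual requests, which is what makes the access and privacy controls mentioned below practical.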

Microsoft plans to integrate MCP into its core platforms, including Azure and Windows 11. Soon, developers will be able to expose app functionalities, such as file access or Linux subsystems, as MCP servers, enabling AI models to use them securely.

GitHub and Microsoft are also contributing updates to the MCP standard itself, including a registry for server discovery and a new authorisation system to manage secure connections.

The broader goal is to let developers build smarter AI-powered applications by making it easier to plug into real-world data and tools, while maintaining strong control over access and privacy.

UK research body hit by 5 million cyber attacks

UK Research and Innovation (UKRI), the country’s national funding body for science and research, has reported a staggering 5.4 million cyber attacks this year — a sixfold increase compared to the previous year.

According to data obtained through freedom of information requests, these threats included 236,400 phishing attempts designed to trick employees into revealing sensitive data and 11,200 malware-based attacks, with the remainder identified as spam or malicious emails.

The scale of these incidents highlights the growing threat faced by both public and private sector institutions. Experts believe the rise of AI has enabled cybercriminals to launch more frequent and sophisticated attacks.

Rick Boyce, chief for technology at AND Digital, warned that the emergence of AI has introduced threats ‘at a pace we’ve never seen before’, calling for a move beyond traditional defences to stay ahead of evolving risks.

UKRI, which is sponsored by the Department for Science, Innovation and Technology, manages an annual budget of £8 billion, much of it invested in cutting-edge research.

A budget like this makes it an attractive target for cybercriminals and state-sponsored actors alike, particularly those looking to steal intellectual property or sabotage infrastructure. Security experts suggest the scale and nature of the attacks point to involvement from hostile nation states, with Russia a likely culprit.

Though UKRI cautioned that differing reporting periods may affect the accuracy of year-on-year comparisons, there is little doubt about the severity of the threat.

The UK’s National Cyber Security Centre (NCSC) has previously warned of Russia’s Unit 29155 targeting British government bodies and infrastructure for espionage and disruption.

With other notorious groups such as Fancy Bear and Sandworm also active, the cybersecurity landscape is becoming increasingly fraught.

Ascension faces fresh data breach fallout

A major cybersecurity breach has struck Ascension, one of the largest nonprofit healthcare systems in the US, exposing the sensitive information of over 430,000 patients.

The incident began in December 2024, when Ascension discovered that patient data had been compromised through a former business partner’s software flaw.

The indirect breach allowed cybercriminals to siphon off a wide range of personal, medical and financial details — including Social Security numbers, diagnosis codes, hospital admission records and insurance data.

The breach adds to growing concerns over the healthcare industry’s vulnerability to cyberattacks. In 2024 alone, 1,160 healthcare-related data breaches were reported, affecting 305 million records — a sharp rise from the previous year.

Many institutions still treat cybersecurity as an afterthought instead of a core responsibility, despite handling highly valuable and sensitive data.

Ascension itself has been targeted multiple times, including a ransomware attack in May 2024 that disrupted services at dozens of hospitals and affected nearly 5.6 million individuals.

Ascension has since filed notices with regulators and is offering two years of identity monitoring to those impacted. However, critics argue this response is inadequate and reflects a broader pattern of negligence across the sector.

The company has not named the third-party vendor responsible, but experts believe the incident may be tied to a larger ransomware campaign that exploited flaws in widely used file-transfer software.

Rather than treating such incidents as isolated, experts warn that these breaches highlight systemic flaws in healthcare’s digital infrastructure. As criminals grow more sophisticated and vendors remain vulnerable, patients bear the consequences.

Until healthcare providers prioritise cybersecurity instead of cutting corners, breaches like this are likely to become even more common — and more damaging.
