Tesla robot learns to cook and clean

Tesla has released a new video showing its Optimus robot performing a variety of domestic tasks, from vacuuming floors to stirring food. Instructed through natural language prompts, the robot handled chores such as cleaning a table, tearing paper towels, and taking out the bin with notable precision.

The development marks another step forward in Tesla’s goal of making humanoid robots useful in everyday settings. The Optimus team claims a breakthrough now allows the robot to learn directly from first-person human videos, accelerating task training compared to traditional methods.

Reinforcement learning is also being used to help Optimus refine its skills through trial and error in simulations or the real world. Tesla hopes to eventually deploy thousands of these robots in its factories to perform repetitive or hazardous jobs.
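
For readers unfamiliar with the term, the sketch below shows what trial-and-error learning looks like in its simplest tabular form: an agent learns, over repeated simulated episodes, to walk along a short corridor to a goal. It is a generic Q-learning illustration under toy assumptions, not a description of Tesla's actual training setup.

```python
import random

# Toy illustration of trial-and-error (reinforcement) learning: an agent learns
# to walk right along a short corridor to reach a goal square. This is a generic
# tabular Q-learning sketch, not Tesla's training pipeline.

N_STATES = 6            # positions 0..5, with the goal at position 5
ACTIONS = [-1, +1]      # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(state):
    """Pick the best-known action, breaking ties at random."""
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

def step(state, action):
    """Simulated environment: move, clip to the corridor, reward 1 at the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    state = 0
    for _ in range(200):  # cap episode length
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate towards reward + discounted best future value
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt
        if done:
            break

# After training, the greedy policy should point right (+1) from every non-goal state.
print({s: greedy(s) for s in range(N_STATES - 1)})
```

In practice, robot learning replaces the toy corridor with high-dimensional simulations and the lookup table with neural networks, but the same estimate-act-update loop applies.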

While still far from superhuman, Optimus’s progress highlights how Tesla is positioning itself in the race to commercialise humanoid robots. Competitors around the world are also developing robots for work and home environments, aiming to reshape how humans interact with machines.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Epic Games wins long battle with Apple

Fortnite has returned to the Apple App Store in the US, nearly five years after it was removed in 2020. The ban followed Epic Games’ attempt to bypass Apple’s 30% commission by introducing its own payment system, sparking a major legal fight.

The game is now also available on the Epic Games Store and AltStore in the EU. This development is being widely viewed as a win for Epic Games in its lengthy dispute over app store practices.

Analysts say the outcome may shift power dynamics in app distribution, giving developers more leverage against platform holders.

The US return comes just days after Fortnite was briefly unavailable globally due to a blocked update. It had already reappeared in the EU earlier this year under the bloc’s new competition rules, the Digital Markets Act. With over 400 million players, Fortnite remains one of the most popular games in the world.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ransomware threat evolves with deceptive PDFs

Ransomware attacks fell by 31% in April 2025 compared to the previous month. Despite the overall decline, the retail sector remained a top target, with incidents at Marks & Spencer, Co-op, Harrods and Peter Green Chilled drawing national attention.

Retail remains vulnerable due to its public profile and potential for large-scale disruption. Experts warn the drop in figures does not reflect a weaker threat, as many attacks go unreported or are deliberately concealed.

Tactics are also shifting, with some groups, such as Babuk 2.0, fabricating attack claims to gain notoriety or extort victims. A rising threat in the ransomware landscape is the use of malicious PDF files, which now account for over a fifth of email-based malware.

These files, increasingly crafted with generative AI, are more readily trusted by users and harder to detect. Cybersecurity experts are urging firms to update their defences and strengthen organisational security cultures to remain resilient.
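
As a rough illustration of what such defences look for, the sketch below scans PDF files for structural markers commonly associated with active content. It is a crude triage heuristic based on raw byte scanning, with an illustrative marker list, not a substitute for proper email security tooling.

```python
import sys

# Crude triage heuristic, not a real anti-malware engine: flag PDFs that contain
# markers commonly associated with active content (JavaScript, auto-run actions,
# embedded files). Legitimate PDFs can contain these too, so treat hits as a
# prompt for closer inspection rather than proof of malice.

SUSPICIOUS_MARKERS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/AA", b"/Launch", b"/EmbeddedFile"]

def flag_pdf(path: str) -> list[str]:
    """Return the suspicious markers found in the raw bytes of a PDF file."""
    with open(path, "rb") as fh:
        data = fh.read()
    return [m.decode() for m in SUSPICIOUS_MARKERS if m in data]

if __name__ == "__main__":
    for path in sys.argv[1:]:
        hits = flag_pdf(path)
        status = "SUSPICIOUS" if hits else "no active-content markers"
        print(f"{path}: {status} {hits if hits else ''}".rstrip())
```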

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google brings sign language translation to AI

Google has introduced Gemma 3n, an advanced AI model that can operate directly on mobile devices, laptops, and tablets without relying on the cloud. The company also revealed MedGemma, its most powerful open AI model for analysing medical images and text.

Gemma 3n can process audio, text, images, and video, and is built to perform well even on devices with less than 2GB of RAM. It shares its architecture with Gemini Nano and is now available in preview.

MedGemma is part of Google’s Health AI Developer Foundations programme and is designed to help developers create custom health-focused applications. It is intended to support a wide range of multimodal healthcare tasks.

Another model, SignGemma, was announced to help translate sign language into spoken-language text. Despite concerns over Gemma’s licensing terms, the models continue to see widespread adoption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Experts urge stronger safeguards as jailbroken chatbots leak illegal data

Hacked AI-powered chatbots pose serious security risks by revealing illicit knowledge the models absorbed during training, according to researchers at Ben-Gurion University of the Negev.

Their study highlights how ‘jailbroken’ large language models (LLMs) can be manipulated to produce dangerous instructions, such as how to hack networks, manufacture drugs, or carry out other illegal activities.

The chatbots, including those powered by models from companies like OpenAI, Google, and Anthropic, are trained on vast internet data sets. While attempts are made to exclude harmful material, AI systems may still internalise sensitive information.

Safety controls are meant to block the release of this knowledge, but researchers demonstrated how it could be bypassed using specially crafted prompts.

The researchers developed a ‘universal jailbreak’ capable of compromising multiple leading LLMs. Once bypassed, the chatbots consistently responded to queries that should have triggered safeguards.

They found some AI models openly advertised online as ‘dark LLMs,’ designed without ethical constraints and willing to generate responses that support fraud or cybercrime.

Professor Lior Rokach and Dr Michael Fire, who led the research, said the growing accessibility of this technology lowers the barrier for malicious use. They warned that dangerous knowledge could soon be accessed by anyone with a laptop or phone.

Despite notifying AI providers about the jailbreak method, the researchers say the response was underwhelming. Some companies dismissed the concerns as outside the scope of bug bounty programmes, while others did not respond.

The report calls on tech companies to improve their models’ security by screening training data, using advanced firewalls, and developing methods for machine ‘unlearning’ to help remove illicit content. Experts also called for clearer safety standards and independent oversight.

OpenAI said its latest models have improved resilience to jailbreaks, and Microsoft pointed to its recent safety initiatives. Other companies have not yet commented.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft and GitHub back Anthropic’s MCP

Microsoft and GitHub are officially joining the steering committee for the Model Context Protocol (MCP), a growing open standard developed by Anthropic that connects AI models with external data systems.

The announcement came during Microsoft’s Build 2025 event, highlighting a new phase of industry-wide backing for the protocol, which already has support from OpenAI and Google.

MCP allows developers to link AI systems with apps, business tools, and software environments using MCP servers and clients. Instead of AI models working in isolation, they can interact directly with sources like content repositories or app features to complete tasks and power tools like chatbots.
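
To make the server-and-client model concrete, here is a minimal sketch of an MCP server written against the official Python MCP SDK (the `mcp` package) and its FastMCP helper; the tool and resource names are purely illustrative and not part of the Microsoft or GitHub announcements.

```python
# A minimal MCP server sketch, assuming the official Python MCP SDK (`mcp` package)
# and its FastMCP helper; tool and resource names here are illustrative only.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

# Create a server that an MCP-capable client (e.g. a chatbot host) can connect to.
mcp = FastMCP("file-tools")

@mcp.tool()
def list_files(directory: str = ".") -> list[str]:
    """List file names in a directory so the model can reason about its contents."""
    return [p.name for p in Path(directory).iterdir() if p.is_file()]

@mcp.resource("note://{name}")
def read_note(name: str) -> str:
    """Expose a simple read-only resource the model can fetch by URI."""
    return f"Contents of note '{name}' would be returned here."

if __name__ == "__main__":
    # FastMCP serves over stdio by default, which is how local MCP clients attach.
    mcp.run()
```

An MCP-capable client, such as a chatbot host, launches this script, discovers the `list_files` tool, and can call it when a user asks about local files, which is the kind of direct interaction with apps and data sources the protocol standardises.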

Microsoft plans to integrate MCP into its core platforms, including Azure and Windows 11. Soon, developers will be able to expose app functionalities, such as file-system access or the Windows Subsystem for Linux, as MCP servers, enabling AI models to use them securely.

GitHub and Microsoft are also contributing updates to the MCP standard itself, including a registry for server discovery and a new authorisation system to manage secure connections.

The broader goal is to let developers build smarter AI-powered applications by making it easier to plug into real-world data and tools, while maintaining strong control over access and privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK research body hit by 5 million cyber attacks

UK Research and Innovation (UKRI), the country’s national funding body for science and research, has reported a staggering 5.4 million cyber attacks this year — a sixfold increase compared to the previous year.

According to data obtained through freedom of information requests, most of these threats arrived as malicious or unwanted email: 236,400 were phishing attempts designed to trick employees into revealing sensitive data, a further 11,200 were malware-based attacks, and the rest were identified as spam or other malicious messages.

The scale of these incidents highlights the growing threat faced by both public and private sector institutions. Experts believe the rise of AI has enabled cybercriminals to launch more frequent and sophisticated attacks.

Rick Boyce, chief for technology at AND Digital, warned that the emergence of AI has introduced threats ‘at a pace we’ve never seen before’, calling for a move beyond traditional defences to stay ahead of evolving risks.

UKRI, which is sponsored by the Department for Science, Innovation and Technology, manages an annual budget of £8 billion, much of it invested in cutting-edge research.

A budget like this makes it an attractive target for cybercriminals and state-sponsored actors alike, particularly those looking to steal intellectual property or sabotage infrastructure. Security experts suggest the scale and nature of the attacks point to involvement from hostile nation states, with Russia a likely culprit.

Though UKRI cautioned that differing reporting periods may affect the accuracy of year-on-year comparisons, there is little doubt about the severity of the threat.

The UK’s National Cyber Security Centre (NCSC) has previously warned of Russia’s Unit 29155 targeting British government bodies and infrastructure for espionage and disruption.

With other notorious groups such as Fancy Bear and Sandworm also active, the cybersecurity landscape is becoming increasingly fraught.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ascension faces fresh data breach fallout

A major cybersecurity breach has struck Ascension, one of the largest nonprofit healthcare systems in the US, exposing the sensitive information of over 430,000 patients.

The incident began in December 2024, when Ascension discovered that patient data had been compromised through a former business partner’s software flaw.

The indirect breach allowed cybercriminals to siphon off a wide range of personal, medical and financial details — including Social Security numbers, diagnosis codes, hospital admission records and insurance data.

The breach adds to growing concerns over the healthcare industry’s vulnerability to cyberattacks. In 2024 alone, 1,160 healthcare-related data breaches were reported, affecting 305 million records — a sharp rise from the previous year.

Many institutions still treat cybersecurity as an afterthought instead of a core responsibility, despite handling highly valuable and sensitive data.

Ascension itself has been targeted multiple times, including a ransomware attack in May 2024 that disrupted services at dozens of hospitals and affected nearly 5.6 million individuals.

Ascension has since filed notices with regulators and is offering two years of identity monitoring to those impacted. However, critics argue this response is inadequate and reflects a broader pattern of negligence across the sector.

The company has not named the third-party vendor responsible, but experts believe the incident may be tied to a larger ransomware campaign that exploited flaws in widely used file-transfer software.

Rather than treating such incidents as isolated, experts warn that these breaches highlight systemic flaws in healthcare’s digital infrastructure. As criminals grow more sophisticated and vendors remain vulnerable, patients bear the consequences.

Until healthcare providers prioritise cybersecurity instead of cutting corners, breaches like this are likely to become even more common — and more damaging.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Jersey artists push back against AI art

A Jersey illustrator has spoken out against the growing use of AI-generated images, calling the trend ‘heartbreaking’ for artists who fear losing their livelihoods to technology.

Abi Overland, known for her intricate hand-drawn illustrations, said it was deeply concerning to see AI-created visuals being shared online without acknowledging their impact on human creators.

She warned that AI systems often rely on artists’ existing work for training, raising serious questions about copyright and fairness.

Overland stressed that genuine illustrations are not simply the product of new tools but of years of human experience and emotion, something AI cannot replicate. She believes the increasing normalisation of AI content is dangerous and could discourage aspiring artists from entering the field.

Fellow Jersey illustrator Jamie Willow echoed the concern, saying many local companies are already replacing human work with AI outputs, undermining the value of art created with genuine emotional connection and moral integrity.

However, not everyone sees AI as a threat. Sebastian Lawson of Digital Jersey argued that artists could instead use AI to enhance their creativity rather than replace it. He insisted that human creators would always have an edge thanks to their unique insight and ability to convey meaning through their work.

The debate comes after the House of Lords blocked the UK government’s data bill for a second time, demanding stronger protections for artists and musicians against AI misuse.

Meanwhile, government officials have said they will not consider any copyright changes unless they are sure such moves would benefit creators as well as tech companies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Chicago Sun-Times under fire for fake summer guide

The Chicago Sun-Times has come under scrutiny after its 18 May issue featured a summer guide riddled with fake books, quotes, and experts, many of which appear to have been generated by AI.

Among genuine titles like Call Me By Your Name, readers encountered fictional works wrongly attributed to real authors, such as Min Jin Lee and Rebecca Makkai. The guide also cited individuals who do not appear to exist, including a professor at the University of Colorado and a food anthropologist at Cornell.

Although the guide carried the Sun-Times logo, the newspaper claims it wasn’t written or approved by its editorial team. It stated that the section had been licensed from a national content partner, reportedly Hearst, and is now being removed from digital editions.

Victor Lim, the senior director of audience development, said the paper is investigating how the content was published and is working to update policies to ensure third-party material aligns with newsroom standards.

Several stories in the guide lack bylines or carry names linked to other questionable content. Marco Buscaglia, credited with one piece, admitted to using AI ‘for background’ but said he had failed to verify the sources on this occasion, calling the oversight ‘completely embarrassing.’

The incident echoes similar controversies at other media outlets where AI-generated material has been presented alongside legitimate reporting. Even when such content originates from third-party providers, the blurred line between verified journalism and fabricated stories continues to erode reader trust.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!