Ryuk ransomware hacker extradited to US after arrest in Ukraine

A key member of the infamous Ryuk ransomware gang has been extradited to the US after his arrest in Kyiv, Ukraine.

The 33-year-old man was detained in April 2025 at the request of the FBI and arrived in the US on 18 June to face multiple charges.

The suspect played a critical role within Ryuk by gaining initial access to corporate networks, which he then passed on to accomplices who stole data and launched ransomware attacks.

Ukrainian authorities identified him during a larger investigation into ransomware groups like LockerGoga, Dharma, Hive, and MegaCortex that targeted companies across Europe and North America.

According to Ukraine’s National Police, forensic analysis showed the man was responsible for locating security flaws in enterprise networks.

Information gathered by the hacker allowed others in the gang to infiltrate systems, steal data, and deploy ransomware payloads that disrupted various industries, including healthcare, during the COVID pandemic.

Ryuk operated from 2018 until mid-2020 before rebranding as the notorious Conti gang, which later fractured into several smaller but still active groups. Researchers estimate that Ryuk alone collected over $150 million in ransom payments before shutting down.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Orange, AFD, and Proparco unite for inclusive and sustainable digital growth

Orange, AFD Group, and Proparco have signed a three-year agreement to accelerate digital inclusion and promote sustainable development across 20 countries, primarily in Africa and the Middle East. The partnership will focus on deploying high-speed digital infrastructure, including network backbones and submarine cables, to address connectivity gaps in underserved and rural regions.

That initiative responds to stark disparities in internet access, with only 37% of Sub-Saharan Africa connected compared to over 91% in Europe. Beyond infrastructure, the partnership focuses on improving access to essential digital services in key sectors such as agriculture, healthcare, and education, while also promoting financial and energy inclusion to reduce inequalities and empower remote communities.

A major priority is supporting youth and fostering local innovation through programs that provide digital skills training and professional integration opportunities, enabling young people to participate actively in the digital economy. At the same time, the initiative aims to build vibrant entrepreneurship ecosystems so that communities can become creators, not just consumers, of technology.

Environmental sustainability and ethical responsibility are also at the heart of the collaboration, with strong commitments to reducing the digital sector’s ecological footprint and ensuring responsible practices in areas like data use, cybersecurity, and AI. The partnership seeks to embed inclusivity, innovation, and sustainability into the digital transformation process.

That partnership reflects a shared goal of using digital technology to promote equality and sustainable development, focusing on sovereign, innovative, and locally driven digital services.

Hexagon unveils AEON humanoid robot powered by NVIDIA to build industrial digital twins

As industries struggle to fill 50 million job vacancies globally, Hexagon has unveiled AEON — a humanoid robot developed in collaboration with NVIDIA — to tackle labour shortages in manufacturing, logistics and beyond.

AEON can perform complex tasks like reality capture, asset inspection and machine operation, thanks to its integration with NVIDIA’s full-stack robotics platform.

By simulating skills in NVIDIA Isaac Sim and training in Isaac Lab, Hexagon drastically cut AEON’s development time, with the robot mastering locomotion in weeks instead of months.

The robot is built using NVIDIA’s trio of AI systems, combining simulation with onboard intelligence powered by Jetson Orin and IGX Thor for real-time navigation and safe collaboration.

AEON will be deployed in factories and warehouses, scanning environments to build high-fidelity digital twins through Hexagon’s cloud-based Reality Cloud Studio and NVIDIA Omniverse.

Hexagon believes AEON can bring digital twins into mainstream use, streamlining industrial workflows through advanced sensor fusion and simulation-first AI. The company is also leveraging synthetic motion data to accelerate robot learning, pushing the boundaries of physical AI for real-world applications.

ChatGPT now supports MCP for business data access, but safety risks remain

OpenAI has officially enabled support for Anthropic’s Model Context Protocol (MCP) in ChatGPT, allowing businesses to connect their internal tools directly to the chatbot through Deep Research.

The development enables employees to retrieve company data from previously siloed systems, offering real-time access to documents and search results via custom-built MCP servers.

Adopting MCP — an open industry protocol recently embraced by OpenAI, Google and Microsoft — opens new possibilities and presents security risks.

OpenAI advises users to avoid third-party MCP servers unless hosted by the official service provider, warning that unverified connections may carry prompt injections or hidden malicious directives. Users are urged to report suspicious activity and avoid exposing sensitive data during integration.

To connect tools, developers must set up an MCP server and create a tailored connector within ChatGPT, complete with detailed instructions. The feature is now live for ChatGPT Enterprise, Team and Edu users, who can share the connector across their workspace as a trusted data source.
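
Under the hood, MCP messages are ordinary JSON-RPC 2.0 calls, which is why a custom connector boils down to a server that answers a handful of well-known methods. The sketch below builds two such messages in Python; the tool name `search` and its arguments are illustrative placeholders, not taken from any particular server.

```python
import json

def mcp_request(req_id, method, params):
    """Build a JSON-RPC 2.0 message, the wire format MCP uses."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# Ask a custom MCP server which tools it exposes
# ("tools/list" is a standard MCP method).
list_tools = mcp_request(1, "tools/list", {})

# Invoke one of those tools. The tool name "search" and its
# arguments here are hypothetical, for illustration only.
call_search = mcp_request(2, "tools/call",
                          {"name": "search",
                           "arguments": {"query": "Q3 revenue report"}})

print(json.dumps(call_search, indent=2))
```

A real connector answers messages like these over one of MCP’s supported transports; the message shapes above are the part a developer writes against.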

Meta offers $100M bonuses to poach OpenAI talent but Altman defends mission-driven culture

Meta has reportedly attempted to lure top talent from OpenAI with signing bonuses exceeding $100 million, according to OpenAI’s CEO Sam Altman.

Speaking on a podcast hosted by his brother, Jack Altman, he revealed that Meta has offered extremely high compensation to key OpenAI staff, yet none have accepted the offers.

Meta CEO Mark Zuckerberg is said to be directly involved in recruiting for a new ‘superintelligence’ team as part of the latest AI push.

The tech giant recently announced a $14.3 billion investment in Scale AI and brought Scale’s CEO, Alexandr Wang, on board. Altman believes Meta sees ChatGPT not only as a competitor to Google but also as a potential rival to Facebook in the contest for user attention.

Altman questioned whether such high-compensation strategies foster the right environment, suggesting that culture cannot be built on upfront financial incentives alone.

He stressed that OpenAI prefers aligning rewards with its mission instead of offering massive pay packets. In his view, sustainable innovation stems from purpose, not payouts.

While recognising Meta’s persistence in the AI race, Altman suggested that the company will likely try again if the current effort fails. He highlighted a cultural difference, saying OpenAI has built a team focused on consistent innovation — something he believes Meta still struggles to understand.

Is AI distorting our view of the Milky Way’s black hole?

A new AI model has created a fresh image of Sagittarius A*, the supermassive black hole at the centre of our galaxy, suggesting it is spinning close to its maximum speed.

The model was trained on noisy data from the Event Horizon Telescope, a globe-spanning network of radio telescopes, using information once dismissed due to atmospheric interference.

Researchers believe this AI-enhanced image shows the black hole’s rotational axis pointing towards Earth, offering potential insights into how radiation and matter behave near such cosmic giants.

By using previously considered unusable data, scientists hope to improve our understanding of black hole dynamics.

However, not all physicists are confident in the results.

Nobel Prize-winning astrophysicist Reinhard Genzel has voiced concern over the reliability of models built on compromised data, stressing that AI should not be treated as a miracle fix. He warned that the new image might be distorted due to the poor quality of its underlying information.

The researchers plan to test their model against newer and more reliable data to address these concerns. Their goal is to refine the AI further and provide more accurate simulations of black holes in the future.

UK cyber agency warns AI will accelerate cyber threats by 2027

The UK’s National Cyber Security Centre has warned that integrating AI into national infrastructure creates a broader attack surface, raising concerns about an increased risk of cyber threats.

Its latest report outlines how AI may amplify the capabilities of threat actors, especially when it comes to exploiting known vulnerabilities more rapidly than ever before.

By 2027, AI-enabled tools are expected to significantly shorten the time between vulnerability disclosure and exploitation. This evolution could pose a serious challenge for defenders, particularly within critical systems.

The NCSC notes that the risk of advanced cyber attacks will likely escalate unless organisations can keep pace with so-called ‘frontier AI’.

The centre also predicts a growing ‘digital divide’ between organisations that adapt to AI-driven threats and those left behind. The divide could further endanger the overall cyber resilience of the UK. As a result, decisive action is being urged to close the gap and reduce future risks.

NCSC operations director Paul Chichester said AI is expanding attack surfaces, increasing the volume of threats, and speeding up malicious activity. He emphasised that while these dangers are real, AI can strengthen the UK’s cyber defences.

Organisations are encouraged to adopt robust security practices using resources like the Cyber Assessment Framework, the 10 Steps to Cyber Security, and the new AI Cyber Security Code of Practice.

China’s robotics industry set to double by 2028, led by drones and humanoid robots

China’s robotics industry is on course to double in size by 2028, with Morgan Stanley projecting market growth from US$47 billion in 2024 to US$108 billion.

With an annual expansion rate of 23 percent, the country is expected to strengthen its leadership in this fast-evolving field. Analysts credit China’s drive for innovation and cost efficiency as key to advancing next-generation robotics.

A cornerstone of the ‘Made in China 2025’ initiative, robotics is central to the nation’s goal of dominating global high-tech industries. Last year, China accounted for 40 percent of the worldwide robotics market and over half of all industrial robot installations.

Recent data shows industrial robot production surged 35.5 percent in May, while service robot output climbed nearly 14 percent.

Morgan Stanley anticipates drones will remain China’s largest robotics segment, set to grow from US$19 billion to US$40 billion by 2028.

Meanwhile, the humanoid robot sector is expected to see an annual growth rate of 63 percent, expanding from US$300 million in 2025 to US$3.4 billion by 2030. By 2050, China could be home to 302 million humanoid robots, around 30 percent of the global total.
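
As a quick sanity check, the projections above are consistent with simple annual compounding from the stated base years:

```python
# Overall robotics market: US$47bn in 2024 growing at 23% a year.
overall_2024 = 47
projected_2028 = overall_2024 * 1.23 ** 4   # four years of compounding
print(round(projected_2028))                # 108, matching US$108 billion

# Humanoid segment: US$0.3bn in 2025 growing at 63% a year.
humanoid_2025 = 0.3
projected_2030 = humanoid_2025 * 1.63 ** 5  # five years of compounding
print(round(projected_2030, 1))             # 3.5, close to the quoted US$3.4 billion
```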

The researchers describe 2025 as a milestone year, marking the start of mass humanoid robot production.

They emphasise that automation is already reshaping China’s manufacturing industry, boosting productivity and quality instead of simply replacing workers and setting the stage for a brighter industrial future.

Workplace deepfake abuse: What employers must know

Deepfake technology—AI-generated videos, images, and audio—has entered the workplace in alarming ways.

Once difficult to produce, deepfakes are now widely accessible and are being used to harass, impersonate, or intimidate employees. These synthetic media attacks can cause deep psychological harm, damage reputations, and expose employers to serious legal risks.

While the law is still catching up, new measures such as the federal Take It Down Act and Florida’s Brooke’s Law require platforms to remove non-consensual deepfake content within 48 hours.

Meanwhile, employers could face claims under existing workplace laws if they fail to act on deepfake harassment. Inaction may lead to lawsuits for creating a hostile environment or for negligent oversight.

Most workplace policies still don’t mention synthetic media, a gap that creates blind spots, especially during investigations, where fake images or audio could wrongly influence decisions.

Employers need to rethink how they assess evidence and protect both the accused and the accuser fairly. It’s time to update handbooks, train staff, and build clear response plans that cover digital impersonation and deepfake abuse.

By treating deepfakes as a modern form of harassment instead of just a tech issue, organisations can respond faster, protect staff, and maintain trust. Proactive training, updated policies, and legal awareness will be crucial to workplace safety in the age of AI.

Anubis ransomware threatens permanent data loss

A new ransomware threat known as Anubis is making waves in the cybersecurity world, combining file encryption with aggressive monetisation tactics and a rare file-wiping feature that prevents data recovery.

Victims discover their files renamed with the .anubis extension and are presented with a ransom note warning that stolen data will be leaked unless payment is made.

What sets Anubis apart is its ability to permanently erase file contents using a wipe command that reduces them to empty, zero-byte shells. Although the filenames remain, the data inside is lost forever, rendering recovery impossible.
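
The reason recovery is impossible can be seen in a few lines of Python: truncating a file keeps its directory entry but discards every byte of content, leaving nothing for recovery tools to restore. This is a minimal illustration of the effect described above, not Anubis code.

```python
import os
import tempfile

# Create a sample "victim" file with some contents.
path = os.path.join(tempfile.mkdtemp(), "report.docx.anubis")
with open(path, "w") as f:
    f.write("quarterly figures")

# Opening in "w" mode truncates the file to zero bytes -- the same
# end state the wipe feature leaves behind.
with open(path, "w"):
    pass

assert os.path.exists(path)        # the filename survives...
assert os.path.getsize(path) == 0  # ...but the data is gone
```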

Researchers have flagged the destructive feature as highly unusual for ransomware; such wiper behaviour is typically seen in cyberespionage rather than financially motivated attacks.

The malware also attempts to change the victim’s desktop wallpaper to reinforce the impact, although in current samples, the image file was missing. Anubis spreads through phishing emails and uses tactics like command-line scripting and stolen tokens to escalate privileges and evade defences.

It operates under a ransomware-as-a-service model, meaning less-skilled cybercriminals can rent and deploy it with ease.

Security experts urge organisations to treat Anubis as more than a typical ransomware threat. Besides strong backup practices, firms are advised to improve email security, limit user privileges, and train staff to spot phishing attempts.

As attackers look to profit from stolen access and unrecoverable destruction, prevention becomes the only true line of defence.
