Hexagon unveils AEON humanoid robot powered by NVIDIA to build industrial digital twins

As industries struggle to fill 50 million job vacancies globally, Hexagon has unveiled AEON — a humanoid robot developed in collaboration with NVIDIA — to tackle labour shortages in manufacturing, logistics and beyond.

AEON can perform complex tasks like reality capture, asset inspection and machine operation, thanks to its integration with NVIDIA’s full-stack robotics platform.

By simulating skills in NVIDIA Isaac Sim and training them in Isaac Lab, Hexagon dramatically shortened AEON's development time, with the robot mastering locomotion in weeks instead of months.

The robot is built on NVIDIA's three-computer robotics architecture, combining simulation with onboard intelligence powered by Jetson Orin and IGX Thor for real-time navigation and safe collaboration.

AEON will be deployed in factories and warehouses, scanning environments to build high-fidelity digital twins through Hexagon’s cloud-based Reality Cloud Studio and NVIDIA Omniverse.

Hexagon believes AEON can bring digital twins into mainstream use, streamlining industrial workflows through advanced sensor fusion and simulation-first AI. The company is also leveraging synthetic motion data to accelerate robot learning, pushing the boundaries of physical AI for real-world applications.

ChatGPT now supports MCP for business data access, but safety risks remain

OpenAI has officially enabled support for Anthropic’s Model Context Protocol (MCP) in ChatGPT, allowing businesses to connect their internal tools directly to the chatbot through Deep Research.

The development enables employees to retrieve company data from previously siloed systems, offering real-time access to documents and search results via custom-built MCP servers.

Adopting MCP — an open industry protocol recently embraced by OpenAI, Google and Microsoft — opens new possibilities but also introduces security risks.

OpenAI advises users to avoid third-party MCP servers unless hosted by the official service provider, warning that unverified connections may carry prompt injections or hidden malicious directives. Users are urged to report suspicious activity and avoid exposing sensitive data during integration.

To connect tools, developers must set up an MCP server and create a tailored connector within ChatGPT, complete with detailed instructions. The feature is now live for ChatGPT Enterprise, Team and Edu users, who can share the connector across their workspace as a trusted data source.
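
By way of illustration, here is a minimal sketch of what such a server could look like in Python, using the FastMCP helper from the official MCP SDK. The search and fetch tool names follow the pattern OpenAI describes for Deep Research connectors, and the in-memory document store is a hypothetical stand-in for a real internal system.

```python
# A minimal sketch of an internal-docs MCP server, using the FastMCP
# helper from the official Python SDK (pip install mcp). The DOCS
# store is a hypothetical stand-in for a real company system.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-docs")

# Hypothetical document index; a real server would query a database,
# wiki or search service instead.
DOCS = {
    "doc-1": {"title": "Onboarding guide", "text": "How we onboard new staff."},
    "doc-2": {"title": "Security policy", "text": "Passwords rotate quarterly."},
}

@mcp.tool()
def search(query: str) -> list[dict]:
    """Return ids and titles of documents matching the query."""
    q = query.lower()
    return [
        {"id": doc_id, "title": doc["title"]}
        for doc_id, doc in DOCS.items()
        if q in doc["title"].lower() or q in doc["text"].lower()
    ]

@mcp.tool()
def fetch(id: str) -> dict:
    """Return the full content of one document by id."""
    return DOCS.get(id, {"error": f"no document with id {id!r}"})

if __name__ == "__main__":
    # ChatGPT connects to remote servers; SSE is one transport the
    # SDK supports out of the box.
    mcp.run(transport="sse")
```

In a setup like this, a workspace admin would register the running server's URL as a custom connector in ChatGPT, after which Deep Research can call the search and fetch tools against it.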

Is AI distorting our view of the Milky Way’s black hole?

A new AI model has created a fresh image of Sagittarius A*, the supermassive black hole at the centre of our galaxy, suggesting it is spinning close to its maximum speed.

The model was trained on noisy data from the Event Horizon Telescope, a globe-spanning network of radio telescopes, using information once dismissed due to atmospheric interference.

Researchers believe this AI-enhanced image shows the black hole’s rotational axis pointing towards Earth, offering potential insights into how radiation and matter behave near such cosmic giants.

By using data previously considered unusable, scientists hope to improve our understanding of black hole dynamics.

However, not all physicists are confident in the results.

Nobel Prize-winning astrophysicist Reinhard Genzel has voiced concern over the reliability of models built on compromised data, stressing that AI should not be treated as a miracle fix. He warned that the new image might be distorted due to the poor quality of its underlying information.

The researchers plan to test their model against newer and more reliable data to address these concerns. Their goal is to refine the AI further and provide more accurate simulations of black holes in the future.

Deepfake technology fuels new harassment risks

AI-generated media is reshaping workplace harassment in the US, with deepfakes used to impersonate colleagues and circulate fabricated explicit content. Recent studies found that by 2023 almost all deepfakes circulating online were sexually explicit, most often targeting women.

Organisations risk liability under existing laws if deepfake incidents create hostile work environments. New legislation like the TAKE IT DOWN Act and Florida’s Brooke’s Law now mandates rapid removal of non-consensual intimate imagery.

Employers are also bracing for proposed rules requiring strict authentication of AI-generated evidence in legal proceedings. Industry experts advise an urgent review of harassment and acceptable use policies, clear incident response plans and targeted training for HR, legal and IT teams.

Protective measures include auditing insurance coverage for synthetic media claims and staying abreast of evolving state and federal regulations. Forward-looking employers already embed deepfake awareness into their harassment prevention and cybersecurity training to safeguard workplace dignity.

Microsoft begins password deletion in six weeks

Microsoft has announced that it will begin deleting saved passwords from its Authenticator app in six weeks, urging users to shift to more secure passkeys. The company confirmed that by August 2025, saved passwords will no longer be accessible, marking a decisive move away from traditional logins.

Users can transition their credentials to Microsoft Edge or adopt passkeys, which are far less vulnerable to phishing and breaches. Google is making similar recommendations, since most users still rely on passwords or outdated two-factor authentication despite the growing risks.
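
To see why passkeys resist these attacks, it helps to look at the challenge-response idea behind them. The toy sketch below, written against the Python cryptography package, illustrates only the principle; real WebAuthn passkeys additionally bind each signature to the website's origin, which is what stops phishing pages from reusing them.

```python
# A toy illustration of the public-key challenge-response idea behind
# passkeys, using the cryptography package. This is a conceptual
# sketch, not the real WebAuthn protocol.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the private key never leaves the device; the service
# stores only the public key, so a server breach leaks no usable secret.
device_key = Ed25519PrivateKey.generate()
stored_public_key = device_key.public_key()

# Login: the server issues a fresh random challenge...
challenge = os.urandom(32)

# ...the device signs it locally after a fingerprint/face/PIN unlock...
signature = device_key.sign(challenge)

# ...and the server verifies the signature against the stored key.
try:
    stored_public_key.verify(signature, challenge)
    print("login accepted")
except InvalidSignature:
    print("login rejected")
```

Because the server stores only a public key and each login signs a fresh random challenge, neither a breached database nor an intercepted login gives an attacker anything reusable.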

The changes reflect a broader industry push to phase out passwords entirely, citing their inherent insecurity and the surge in credential-based attacks. Microsoft also warned that attackers are intensifying efforts to exploit passwords before their relevance fades.

Authenticator will continue supporting passkeys, but users must keep it enabled as their passkey provider. Microsoft’s message is clear: act now to secure your accounts before password support disappears.

T-Mobile launches priority network for emergency services

T-Mobile is expanding its support for emergency response teams by combining 5G, AI and drone technologies to boost disaster recovery operations. Its T-Priority service, launched last year, offers dedicated network slices to ensure fast, low-latency data access during crises.

US first responders in disaster-hit regions like Southern California and North Carolina have already used the system to operate body cams, traffic monitoring tools and mapping systems. T-Mobile deployed hundreds of 5G routers and hotspot devices to aid efforts during the Palisades wildfire and recent hurricanes.

AI and drone technologies are key in reconnaissance, damage assessment and real-time communication. T-Mobile’s self-organising network adapts to changing conditions using live data, ensuring stable connectivity throughout emergency operations.

Public-private collaboration is central to the initiative, with T-Mobile working alongside FEMA, the Department of Defense and local emergency centres. The company has also signed a major deal to provide New York City with a dedicated public safety network.

UK cyber agency warns AI will accelerate cyber threats by 2027

The UK’s National Cyber Security Centre has warned that integrating AI into national infrastructure creates a broader attack surface, raising concerns about an increased risk of cyber threats.

Its latest report outlines how AI may amplify the capabilities of threat actors, especially when it comes to exploiting known vulnerabilities more rapidly than ever before.

By 2027, AI-enabled tools are expected to significantly shorten the time between vulnerability disclosure and exploitation. That shift could pose a serious challenge for defenders, particularly within critical systems.

The NCSC notes that the risk of advanced cyber attacks will likely escalate unless organisations can keep pace with so-called ‘frontier AI’.

The centre also predicts a growing ‘digital divide’ between organisations that adapt to AI-driven threats and those left behind. The divide could further endanger the overall cyber resilience of the UK. As a result, decisive action is being urged to close the gap and reduce future risks.

NCSC operations director Paul Chichester said AI is expanding attack surfaces, increasing the volume of threats, and speeding up malicious activity. He emphasised that while these dangers are real, AI can strengthen the UK’s cyber defences.

Organisations are encouraged to adopt robust security practices using resources like the Cyber Assessment Framework, the 10 Steps to Cyber Security, and the new AI Cyber Security Code of Practice.

Google warns against weak passwords amid £12bn scams

Gmail users are being urged to upgrade their security as online scams continue to rise sharply, with cyber criminals stealing over £12 billion in the past year alone. Google is warning that simple passwords leave people vulnerable to phishing and account takeovers.

To combat the threat, users are encouraged to switch to passkeys or use ‘Sign in with Google’, both of which offer stronger protections through fingerprint, face ID or PIN verification. Over 60% of Baby Boomers and Gen X users still rely on weak passwords, increasing their exposure to attacks.

Despite the availability of secure alternatives, only 30% of users reportedly use them daily. Gen Z is leading the shift by adopting newer tools, bypassing outdated security habits altogether.

Google recommends adding 2-Step Verification for those unwilling to leave passwords behind. With scams growing more sophisticated, extra security measures are no longer optional; they are essential.
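
For context on what that fallback involves, the sketch below shows the time-based one-time password (TOTP) scheme most authenticator apps implement, using the third-party pyotp package; the secret is generated on the spot rather than provisioned from a real account.

```python
# A minimal sketch of the time-based one-time password (TOTP) scheme
# used by most 2-Step Verification apps, via the third-party pyotp
# package (pip install pyotp).
import pyotp

# In practice the shared secret is provisioned once, usually via a
# QR code scanned into the authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()                # six-digit code, rotates every 30 s
print("current code:", code)
print("verified:", totp.verify(code))  # what the server checks
```

Codes like these can still be phished in real time, which is why Google, like the rest of the industry, ultimately steers users towards passkeys.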

Workplace deepfake abuse: What employers must know

Deepfake technology—AI-generated videos, images, and audio—has entered the workplace in alarming ways.

Once difficult to produce, deepfakes are now widely accessible and are being used to harass, impersonate, or intimidate employees. These synthetic media attacks can cause deep psychological harm, damage reputations, and expose employers to serious legal risks.

While US federal law hasn't yet caught up, new laws like the TAKE IT DOWN Act and Florida's Brooke's Law require platforms to remove non-consensual deepfake content within 48 hours.

Meanwhile, employers could face claims under existing workplace laws if they fail to act on deepfake harassment. Inaction may lead to lawsuits for creating a hostile environment or for negligent oversight.

Most workplace policies still don't mention synthetic media, a gap that creates blind spots, especially during investigations, where fake images or audio could wrongly influence decisions.

Employers need to shift how they assess evidence and protect both accused and accuser fairly. It’s time to update handbooks, train staff, and build clear response plans that include digital impersonation and deepfake abuse.

By treating deepfakes as a modern form of harassment instead of just a tech issue, organisations can respond faster, protect staff, and maintain trust. Proactive training, updated policies, and legal awareness will be crucial to workplace safety in the age of AI.

Anubis ransomware threatens permanent data loss

A new ransomware threat known as Anubis is making waves in the cybersecurity world, combining file encryption with aggressive monetisation tactics and a rare file-wiping feature that prevents data recovery.

Victims discover their files renamed with the .anubis extension and are presented with a ransom note warning that stolen data will be leaked unless payment is made.

What sets Anubis apart is a built-in wipe command that permanently erases file contents, reducing them to zero-byte shells. Although the filenames remain, the data inside is gone for good, making recovery impossible.

Researchers have flagged the destructive feature as highly unusual for ransomware; such wiping is more typical of cyberespionage than of financially motivated attacks.

The malware also attempts to change the victim’s desktop wallpaper to reinforce the impact, although in current samples, the image file was missing. Anubis spreads through phishing emails and uses tactics like command-line scripting and stolen tokens to escalate privileges and evade defences.

It operates on a ransomware-as-a-service model, meaning less-skilled cybercriminals can rent and use it easily.

Security experts urge organisations to treat Anubis as more than a typical ransomware threat. Besides strong backup practices, firms are advised to improve email security, limit user privileges, and train staff to spot phishing attempts.

As attackers look to profit from stolen access and unrecoverable destruction, prevention becomes the only true line of defence.
