Hexagon unveils AEON humanoid robot powered by NVIDIA to build industrial digital twins

As industries struggle to fill 50 million job vacancies globally, Hexagon has unveiled AEON — a humanoid robot developed in collaboration with NVIDIA — to tackle labour shortages in manufacturing, logistics and beyond.

AEON can perform complex tasks like reality capture, asset inspection and machine operation, thanks to its integration with NVIDIA’s full-stack robotics platform.

By simulating skills in NVIDIA Isaac Sim and training them in Isaac Lab, Hexagon drastically reduced AEON’s development time, with the robot mastering locomotion in weeks instead of months.

The robot is built on NVIDIA’s ‘three computer’ robotics architecture, which combines AI training and simulation with onboard intelligence powered by Jetson Orin and IGX Thor for real-time navigation and safe collaboration.

AEON will be deployed in factories and warehouses, scanning environments to build high-fidelity digital twins through Hexagon’s cloud-based Reality Cloud Studio and NVIDIA Omniverse.

Hexagon believes AEON can bring digital twins into mainstream use, streamlining industrial workflows through advanced sensor fusion and simulation-first AI. The company is also leveraging synthetic motion data to accelerate robot learning, pushing the boundaries of physical AI for real-world applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT now supports MCP for business data access, but safety risks remain

OpenAI has officially enabled support for Anthropic’s Model Context Protocol (MCP) in ChatGPT, allowing businesses to connect their internal tools directly to the chatbot through Deep Research.

The development enables employees to retrieve company data from previously siloed systems, offering real-time access to documents and search results via custom-built MCP servers.

Adopting MCP — an open industry protocol recently embraced by OpenAI, Google and Microsoft — opens new possibilities but also presents security risks.

OpenAI advises users to avoid third-party MCP servers unless hosted by the official service provider, warning that unverified connections may carry prompt injections or hidden malicious directives. Users are urged to report suspicious activity and avoid exposing sensitive data during integration.

To connect tools, developers must set up an MCP server and create a tailored connector within ChatGPT, complete with detailed instructions. The feature is now live for ChatGPT Enterprise, Team and Edu users, who can share the connector across their workspace as a trusted data source.
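
As a rough illustration of what the server side of such a connector can look like, here is a minimal sketch using the open-source `mcp` Python SDK (FastMCP). The `search` and `fetch` tool names, the in-memory document store and the SSE transport are illustrative assumptions rather than OpenAI-mandated details; a real deployment would add authentication and follow the connector instructions described above.

```python
# Minimal sketch of an internal-data MCP server using the open-source `mcp`
# Python SDK (FastMCP). Tool names, the toy document store and the transport
# choice are illustrative assumptions, not requirements from OpenAI.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-docs")  # server name advertised to MCP clients

# Stand-in for a real document system (wiki, CRM, ticketing tool, etc.).
DOCS = {
    "q3-report": "Q3 revenue grew 12% quarter over quarter ...",
    "onboarding": "New hires should request VPN access on day one ...",
}

@mcp.tool()
def search(query: str) -> list[str]:
    """Return the IDs of documents whose text mentions the query."""
    return [doc_id for doc_id, text in DOCS.items() if query.lower() in text.lower()]

@mcp.tool()
def fetch(doc_id: str) -> str:
    """Return the full text of a single document by ID."""
    return DOCS.get(doc_id, "Document not found.")

if __name__ == "__main__":
    # Expose the server over SSE so a remote client can reach it;
    # the default stdio transport is fine for local testing.
    mcp.run(transport="sse")
```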

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta offers $100M bonuses to poach OpenAI talent but Altman defends mission-driven culture

Meta has reportedly attempted to lure top talent from OpenAI with signing bonuses exceeding $100 million, according to OpenAI’s CEO Sam Altman.

Speaking on a podcast hosted by his brother, Jack Altman, he revealed that Meta has offered extremely high compensation to key OpenAI staff, yet none have accepted the offers.

Meta CEO Mark Zuckerberg is said to be directly involved in recruiting for a new ‘superintelligence’ team as part of the latest AI push.

The tech giant recently announced a $14.3 billion investment in Scale AI and brought Scale’s CEO, Alexandr Wang, on board. Altman believes Meta sees ChatGPT not only as a competitor to Google but also as a potential rival to Facebook in the contest for user attention.

Altman questioned whether such high-compensation strategies foster the right environment, suggesting that culture cannot be built on upfront financial incentives alone.

He stressed that OpenAI prefers aligning rewards with its mission instead of offering massive pay packets. In his view, sustainable innovation stems from purpose, not payouts.

While recognising Meta’s persistence in the AI race, Altman suggested that the company will likely try again if the current effort fails. He highlighted a cultural difference, saying OpenAI has built a team focused on consistent innovation — something he believes Meta still struggles to understand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI brings DALL-E image creation to WhatsApp users worldwide

OpenAI has officially launched image creation for WhatsApp users, expanding access to its AI visual tools via the verified number +1-800-ChatGPT. The feature lets users generate or edit images directly within their chats using natural language prompts.

Previously limited to the web and mobile versions of ChatGPT, the image generation tool—powered by DALL-E—is now available globally on WhatsApp, free of charge. OpenAI announced the rollout via X, encouraging users to connect their accounts for enhanced functionality.

To get started, users should save +1-800-ChatGPT (+1-800-242-8478) to their contacts, send ‘Hi’ via WhatsApp, and follow the instructions to link their OpenAI account.

Once verified, they can prompt the AI with creative requests such as ‘design a futuristic skyline’ or ‘show a dog surfing on Mars’ and receive bespoke visuals in return.

The move further integrates generative AI into everyday messaging, making powerful image-creation tools more accessible to a broad user base.

Meanwhile, WhatsApp is preparing to introduce in-app advertising. With over two billion active users, Meta plans to monetise the platform more aggressively—signalling a notable shift in WhatsApp’s strategy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK cyber agency warns AI will accelerate cyber threats by 2027

The UK’s National Cyber Security Centre has warned that integrating AI into national infrastructure broadens the attack surface and raises the risk of cyber threats.

Its latest report outlines how AI may amplify the capabilities of threat actors, especially when it comes to exploiting known vulnerabilities more rapidly than ever before.

By 2027, AI-enabled tools are expected to significantly shorten the time between vulnerability disclosure and exploitation. This evolution could pose a serious challenge for defenders, particularly within critical systems.

The NCSC notes that the risk of advanced cyber attacks will likely escalate unless organisations can keep pace with so-called ‘frontier AI’.

The centre also predicts a growing ‘digital divide’ between organisations that adapt to AI-driven threats and those left behind. The divide could further endanger the overall cyber resilience of the UK. As a result, decisive action is being urged to close the gap and reduce future risks.

NCSC operations director Paul Chichester said AI is expanding attack surfaces, increasing the volume of threats, and speeding up malicious activity. He emphasised that while these dangers are real, AI can strengthen the UK’s cyber defences.

Organisations are encouraged to adopt robust security practices using resources like the Cyber Assessment Framework, the 10 Steps to Cyber Security, and the new AI Cyber Security Code of Practice.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta AI adds pop-up warning after users share sensitive info

Meta has introduced a new pop-up in its Meta AI app, alerting users that any prompts they share may be made public. While AI chat interactions are rarely private by design, many users appeared unaware that their conversations could be published for others to see.

The Discovery feed in the Meta AI app had previously featured conversations that included intimate details—such as break-up confessions, attempts at self-diagnosis, and private photo edits.

According to multiple reports last week, these were often shared unknowingly by users who may not have realised the implications of the app’s sharing functions. Mashable confirmed this by finding such examples directly in the feed.

Now, when a user taps the ‘Share’ button on a Meta AI conversation, a new warning appears: ‘Prompts you post are public and visible to everyone. Your prompts may be suggested by Meta on other Meta apps. Avoid sharing personal or sensitive information.’ A ‘Post to feed’ button then appears below.

Although the sharing step has always required users to confirm, Business Insider reports that the feature wasn’t clearly explained—leading some users to publish their conversations unintentionally. The new alert aims to clarify that process.

As of this week, Meta AI’s Discovery feed features mostly AI-generated images and more generic prompts, often from official Meta accounts. For users concerned about privacy, there is an option in the app’s settings to opt out of the Discovery feed altogether.

Still, experts advise against entering personal or sensitive information into AI chatbots, including Meta AI. Adjusting privacy settings and avoiding the ‘Share’ feature are the best ways to protect your data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Plumbing still safe as AI replaces office jobs, says AI pioneer

Nobel Prize-winning scientist Geoffrey Hinton, often called the ‘Godfather of AI,’ has warned that many intellectual jobs are at risk of being replaced by AI—while manual trades like plumbing may remain safe for years to come.

Speaking on the Diary of a CEO podcast, Hinton predicted that AI will eventually surpass human capabilities across most fields, but said it will take far longer to master physical skills. ‘A good bet would be to be a plumber,’ he noted, citing the complexity of physical manipulation as a barrier for AI.

Hinton, known for his pioneering work on neural networks, said ‘mundane intellectual labour’ would be among the first to go. ‘AI is just going to replace everybody,’ he said, naming paralegals and call centre workers as particularly vulnerable.

He added that while highly skilled roles or those in sectors with overwhelming demand—like healthcare—may endure, most jobs are unlikely to escape the wave of disruption. ‘Most jobs, I think, are not like that,’ he said, forecasting widespread upheaval in the labour market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China’s robotics industry set to double by 2028, led by drones and humanoid robots

China’s robotics industry is on course to double in size by 2028, with Morgan Stanley projecting market growth from US$47 billion in 2024 to US$108 billion.

With an annual expansion rate of 23 percent, the country is expected to strengthen its leadership in this fast-evolving field. Analysts credit China’s drive for innovation and cost efficiency as key to advancing next-generation robotics.

A cornerstone of the ‘Made in China 2025’ initiative, robotics is central to the nation’s goal of dominating global high-tech industries. Last year, China accounted for 40 percent of the worldwide robotics market and over half of all industrial robot installations.

Recent data shows industrial robot production surged 35.5 percent in May, while service robot output climbed nearly 14 percent.

Morgan Stanley anticipates drones will remain China’s largest robotics segment, set to grow from US$19 billion to US$40 billion by 2028.

Meanwhile, the humanoid robot sector is expected to see an annual growth rate of 63 percent, expanding from US$300 million in 2025 to US$3.4 billion by 2030. By 2050, China could be home to 302 million humanoid robots, accounting for roughly 30 percent of the world’s humanoid robot population.
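
As a back-of-the-envelope check (not part of the Morgan Stanley report), the quoted compound annual growth rates do reproduce those headline figures:

```python
# Quick sanity check of the compound growth figures quoted above.
def project(start_bn: float, annual_growth: float, years: int) -> float:
    """Compound a starting market size (in US$ billions) forward at a fixed annual rate."""
    return start_bn * (1 + annual_growth) ** years

# Overall robotics market: US$47bn in 2024 growing ~23% a year to 2028.
print(project(47, 0.23, 4))    # ~107.6, in line with the ~US$108bn projection

# Humanoid robots: US$0.3bn in 2025 growing ~63% a year to 2030.
print(project(0.3, 0.63, 5))   # ~3.45, in line with the ~US$3.4bn projection
```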

The researchers describe 2025 as a milestone year, marking the start of mass humanoid robot production.

They emphasise that automation is already reshaping China’s manufacturing industry, boosting productivity and quality rather than simply replacing workers, and setting the stage for a brighter industrial future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Diplo empowers Armenian civil society on digital issues

A new round of training sessions has been launched in Armenia to strengthen civil society’s understanding of digital governance. The initiative, which began on 12 June, brings together NGO representatives from both the regions and the capital to deepen their knowledge of crucial digital topics, including internet governance, AI, and digital rights.

The training programme combines online and offline components, aiming to equip participants with the tools needed to actively shape Armenia’s digital future. By increasing the digital competence of civil society actors, the programme aspires to promote broader democratic engagement and more informed contributions to policy discussions in the digital space.

The educational initiative is being carried out by Diplo as part of the ‘Digital Democracy for ALL’ measure by GIZ (Deutsche Gesellschaft für Internationale Zusammenarbeit), in close cooperation with several regional GIZ projects that focus on civil society and public administration reform in Eastern Partnership countries. The sessions have been praised for their depth and impact, with particular appreciation extended to Angela Saghatelyan for her leadership, and to Diplo’s experts Vladimir Radunovic, Katarina Bojovic, and Marília Maciel for their contributions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia’s Jensen Huang clashes with Anthropic CEO over AI job loss predictions

A fresh dispute has erupted between Nvidia and Anthropic after Anthropic CEO Dario Amodei warned that AI could eliminate 50% of entry-level white-collar jobs within the next five years, potentially pushing unemployment as high as 20%.

Nvidia’s Jensen Huang dismissed the claim, saying at VivaTech in Paris that he ‘pretty much disagreed with almost everything’ Amodei says, accusing him of fearmongering and advocating for a monopoly on AI development.

Huang emphasised the importance of open, transparent development, stating, ‘If you want things to be done safely and responsibly, you do it in the open… Don’t do it in a dark room and tell me it’s safe.’

Anthropic pushed back, saying Amodei supports national AI transparency standards and never claimed only Anthropic can build safe AI.

The clash comes amid growing scrutiny of Anthropic, which faces a lawsuit from Reddit for allegedly scraping content without consent and controversy over a Claude 4 Opus test that simulated blackmail scenarios.

The companies have also clashed over AI export controls to China, with Anthropic urging tighter rules and Nvidia denying reports that its chips were smuggled using extreme methods like fake pregnancies or shipments with live lobsters.

Huang maintains an optimistic outlook, saying AI will create new jobs in fields like prompt engineering. At the same time, Amodei has consistently warned that the economic fallout could be severe, rejecting universal basic income as a long-term solution.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!