Plumbing still safe as AI replaces office jobs, says AI pioneer

Nobel Prize-winning scientist Geoffrey Hinton, often called the ‘Godfather of AI,’ has warned that many intellectual jobs are at risk of being replaced by AI—while manual trades like plumbing may remain safe for years to come.

Speaking on the Diary of a CEO podcast, Hinton predicted that AI will eventually surpass human capabilities across most fields, but said it will take far longer to master physical skills. ‘A good bet would be to be a plumber,’ he noted, citing the complexity of physical manipulation as a barrier for AI.

Hinton, known for his pioneering work on neural networks, said ‘mundane intellectual labour’ would be among the first to go. ‘AI is just going to replace everybody,’ he said, naming paralegals and call centre workers as particularly vulnerable.

He added that while highly skilled roles or those in sectors with overwhelming demand—like healthcare—may endure, most jobs are unlikely to escape the wave of disruption. ‘Most jobs, I think, are not like that,’ he said, forecasting widespread upheaval in the labour market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT loses chess match to Atari 2600

ChatGPT was trounced in a chess match by a 1979 video game running on an Atari 2600 emulator. Citrix engineer Robert Caruso set up the match using Video Chess to test how the AI would perform against vintage gaming software.

The result was unexpectedly lopsided. ChatGPT confused rooks for bishops, forgot piece positions and made repeated beginner mistakes, eventually asking for the match to be restarted. Even when standard chess notation was used, its performance failed to improve.

Caruso described the 90-minute session as full of basic blunders, saying the AI would have been laughed out of a primary school chess club. His post highlighted the limitations of ChatGPT’s architecture, which is built for language understanding, not strategic board gameplay.
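
For readers curious how such a test works in practice, a harness can be sketched in a few lines. The sketch below is not Caruso's actual setup; it assumes a hypothetical ask_llm() wrapper around a chat model and uses the python-chess library to track the position externally, so the model is shown the full board every turn and illegal moves are rejected rather than silently accepted:

```python
import random
import chess  # pip install python-chess

def ask_llm(fen: str) -> str:
    """Hypothetical chat-API wrapper: given a FEN position, return one move in SAN."""
    raise NotImplementedError("wire this up to a chat model of your choice")

board = chess.Board()
while not board.is_game_over():
    if board.turn == chess.WHITE:
        move = ask_llm(board.fen())          # the model sees the full position each turn
        try:
            board.push_san(move)             # rejects illegal or hallucinated moves
        except ValueError:
            print(f"Illegal move from the model: {move}")
            break
    else:
        board.push(random.choice(list(board.legal_moves)))  # stand-in opponent
print(board.result())
```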

While the experiment doesn’t mean ChatGPT is entirely useless at chess, it suggests users are better off discussing the game with the bot than challenging it. OpenAI has not yet responded to the light-hearted but telling critique.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI considers antitrust action against Microsoft over AI hosting control

OpenAI is reportedly trying to reduce Microsoft's exclusive control over hosting its AI models, signalling growing friction between the two companies.

According to the Wall Street Journal, OpenAI leadership has considered filing an antitrust complaint against Microsoft, alleging anti-competitive behaviour in their ongoing collaboration. The move could trigger federal regulatory scrutiny.

The tension comes amid ongoing talks over OpenAI's corporate restructuring. A report by The Information suggests that OpenAI is negotiating to grant Microsoft a 33% stake in its reorganised for-profit unit. In exchange, Microsoft would give up rights to future profits.

OpenAI also wants to revise its existing contract with Microsoft, particularly clauses that grant exclusive Azure hosting rights. The company reportedly aims to exclude its planned $3 billion acquisition of AI startup Windsurf from the agreement, which otherwise gives Microsoft access to OpenAI’s intellectual property.

This developing rift could reshape one of the most influential alliances in AI. Microsoft has invested heavily in OpenAI since 2019 and integrates its models into Microsoft 365 Copilot and Azure services. However, both firms are diversifying.

OpenAI is turning to Google Cloud and Oracle for additional computing power, while Microsoft has begun integrating alternative AI models into its products.

Industry experts warn that regulatory scrutiny or contract changes could impact enterprise customers relying on tightly integrated AI solutions, particularly in sectors like healthcare and finance. Companies may face service disruptions, higher costs, or compatibility challenges if major players shift strategy or infrastructure.

Analysts suggest that the era of single-model reliance may be ending. As innovation from rivals like DeepSeek accelerates, enterprises and cloud providers are moving toward multi-model support, aiming for modular, scalable, and use-case-specific AI deployments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CISOs warn AI-driven cyberattacks are rising, with DNS infrastructure at risk

A new report warns that chief information security officers (CISOs) are bracing for a sharp increase in cyber-attacks as AI continues to reshape the global threat landscape. According to CSC’s report, 98% of CISOs expect rising attacks over the next three years, with domain infrastructure a key concern.

AI-powered domain generation algorithms (DGAs) have been flagged as a key threat by 87% of security leaders. Cyber-squatting, DNS hijacking, and DDoS attacks remain top risks, with nearly all CISOs expressing concern over bad actors’ increasing use of AI.

However, only 7% said they feel confident in defending against domain-based threats.
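
For context, a domain generation algorithm lets malware and its operators derive the same ever-changing list of rendezvous domains from a shared seed, so static blocklists are always a step behind. A deliberately simple, non-AI sketch of the idea (the seed and hashing scheme here are invented for illustration):

```python
import datetime
import hashlib

def daily_domains(seed: str, day: datetime.date, count: int = 5) -> list[str]:
    """Derive the day's candidate rendezvous domains from a shared seed."""
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}|{day.isoformat()}|{i}".encode()).hexdigest()
        # Map hex digits onto letters to form a DNS-safe label
        label = "".join(chr(ord("a") + int(ch, 16) % 26) for ch in digest[:12])
        domains.append(label + ".com")
    return domains

print(daily_domains("example-seed", datetime.date(2025, 1, 1)))
```

The concern flagged in the report is that AI-assisted variants go further, producing more readable, brand-like names that are harder for detectors tuned to gibberish output to flag.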

Concerns have also been raised about identity verification. Around 99% of companies worry their domain registrars fail to apply adequate Know Your Customer (KYC) policies, leaving them vulnerable to infiltration.

Meanwhile, half of organisations have not implemented or tested a formal incident response plan or adopted AI-driven monitoring tools.

Budget constraints continue to limit cybersecurity readiness. Despite the growing risks, only 7% of CISOs reported a significant increase in security budgets between 2024 and 2025. CSC’s Ihab Shraim warned that DNS infrastructure is a prime target and urged firms to act before facing technical and reputational fallout.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Workplace deepfake abuse: What employers must know

Deepfake technology—AI-generated videos, images, and audio—has entered the workplace in alarming ways.

Once difficult to produce, deepfakes are now widely accessible and are being used to harass, impersonate, or intimidate employees. These synthetic media attacks can cause deep psychological harm, damage reputations, and expose employers to serious legal risks.

While US federal law hasn't yet fully caught up, new legislation such as the Take It Down Act and Florida's Brooke's Law requires platforms to remove non-consensual deepfake content within 48 hours.

Meanwhile, employers could face claims under existing workplace laws if they fail to act on deepfake harassment. Inaction may lead to lawsuits for creating a hostile environment or for negligent oversight.

Most workplace policies still don't mention synthetic media, a gap that creates blind spots, especially during investigations, where fake images or audio could wrongly influence decisions.

Employers need to shift how they assess evidence and protect both the accused and the accuser fairly. It's time to update handbooks, train staff, and build clear response plans that cover digital impersonation and deepfake abuse.

By treating deepfakes as a modern form of harassment instead of just a tech issue, organisations can respond faster, protect staff, and maintain trust. Proactive training, updated policies, and legal awareness will be crucial to workplace safety in the age of AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT and generative AI have polluted the internet — and may have broken themselves

The explosion of generative AI tools like ChatGPT has flooded the internet with low-quality, AI-generated content, making it harder for future models to learn from authentic human knowledge.

As AI continues to train on increasingly polluted data, a feedback loop forms in which models imitate content that is itself machine-made, leading to a steady drop in originality and usefulness. This worrying trend is referred to as 'model collapse'.

To illustrate the risk, researchers compare clean pre-AI data to 'low-background steel', a rare kind of steel made before the first nuclear tests in 1945 that remains vital for specific medical and scientific uses.

Just as modern steel became contaminated by radiation, modern data is being tainted by artificial content. Cambridge researcher Maurice Chiodo notes that pre-2022 data is now seen as ‘safe, fine, clean’, while everything after is considered ‘dirty’.
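
The statistical core of the loop is easy to reproduce in miniature. The toy sketch below (a Gaussian stand-in, not a language model) repeatedly fits a distribution to samples drawn from its own previous fit; over many generations the spread of the data tends to decay, which is the collapse dynamic in its simplest form:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=20)   # small 'clean' pre-AI dataset

for generation in range(1, 101):
    mu, sigma = data.mean(), data.std()          # 'train' on the current data
    data = rng.normal(mu, sigma, size=20)        # next generation learns from outputs
    if generation % 20 == 0:
        print(f"generation {generation:3d}: mean = {mu:+.3f}, std = {sigma:.3f}")
```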

A key concern is that techniques like retrieval-augmented generation (RAG), which allow AI to pull real-time data from the internet, risk spreading even more flawed content. Some research already shows that retrieval of this kind can lead to more 'unsafe' outputs.
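
Mechanically, RAG splices retrieved text directly into the model's prompt, so whatever the retriever surfaces, human-written or synthetic, shapes the answer. A minimal sketch, assuming hypothetical embed() and ask_llm() wrappers:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding call; assumed to return a unit-length vector."""
    raise NotImplementedError

def ask_llm(prompt: str) -> str:
    """Hypothetical chat-model call."""
    raise NotImplementedError

def rag_answer(question: str, corpus: list[str], k: int = 3) -> str:
    q = embed(question)
    # Rank documents by cosine similarity (dot product of unit vectors)
    ranked = sorted(corpus, key=lambda doc: float(np.dot(embed(doc), q)), reverse=True)
    context = "\n\n".join(ranked[:k])  # retrieved text goes straight into the prompt
    return ask_llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
```

Nothing in this pipeline distinguishes authentic sources from machine-generated ones, which is how pollution propagates.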

If developers rely on such polluted data, scaling models by adding more information becomes far less effective, potentially hitting a wall in progress.

Chiodo argues that future AI development could be severely limited without a clean data reserve. He and his colleagues urge the introduction of clear labelling and tighter controls on AI content.

However, industry resistance to regulation might make meaningful reform difficult, raising doubts about whether the pollution can be reversed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Scientists convert brain signals into words using AI

Australian scientists have developed an AI model that converts brainwaves into spoken words and sentences using a wearable EEG cap.

The system, created at the University of Technology Sydney, marks a significant step in communication technology and cognitive care.

The deep learning model, designed by Daniel Leong, Charles Zhou, and Chin-Teng Lin, currently works with a limited vocabulary but has achieved around 75% accuracy. Researchers aim to improve this to 90% by expanding training data and refining brainwave analysis.

Bioelectronics expert Mohit Shivdasani noted that AI now detects neural patterns previously hidden from human interpretation. Future uses could include real-time thought-to-text interfaces and even direct communication between people via brain signals.

The breakthrough opens new possibilities for patients with speech or movement impairments, pointing to future human-machine interaction that bypasses traditional input methods.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia’s sovereign AI vision gains EU support

Nvidia CEO Jensen Huang’s call for ‘sovereign AI’ is gaining traction among European leaders who want more control over their data and digital future. He argues that nations must develop AI rooted in their own language, culture and infrastructure.

During a recent European tour, Huang unveiled major partnerships and investments in European cities, citing the region's over-reliance on US tech firms. European officials echoed his concerns, with French President Emmanuel Macron and German Chancellor Friedrich Merz supporting national AI initiatives.

The EU plans to build four AI gigafactories, aiming to reduce dependence on US cloud giants and strengthen regional innovation. Nvidia has committed to providing chips for these projects, while startups like Mistral are working to become local leaders in AI development.

Despite enthusiasm, high energy costs and limited resources may hinder Europe’s progress. Industry voices warn that without sustained investment, the region could struggle to match the spending power of US hyperscalers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Smart machines, dark intentions: UN urges global action on AI threats

The United Nations has warned that terrorists could seize control of AI-powered vehicles to launch devastating attacks in public spaces. A new report outlines how extremists might exploit autonomous cars and drones to bypass traditional defences.

There are also fears that AI could be used for facial-recognition targeting and for mass 'swarm' assaults using aerial devices. Experts suggest that key parts of modern infrastructure could be turned against the public if hacked.

Britain’s updated counter-terrorism strategy now reflects these growing concerns, including the risk of AI-generated propaganda and detailed attack planning. The UN has called for immediate global cooperation to limit how such technologies can be misused.

Security officials maintain that AI also offers valuable tools in the fight against extremism, enabling quicker intelligence processing and real-time threat identification. Nonetheless, authorities have been urged to prepare for worst-case scenarios involving AI-directed violence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Armenia plans major AI hub with NVIDIA and Firebird

Armenia has unveiled plans to develop a $500 million AI supercomputing hub in partnership with US tech leader NVIDIA, AI cloud firm Firebird, and local telecoms group Team.

Announced at the Viva Technology conference in Paris, the initiative marks the largest tech investment ever seen in the South Caucasus.

Due to open in 2026, the facility will house thousands of NVIDIA’s Blackwell GPUs and offer more than 100 megawatts of scalable computing power. Designed to advance AI research, training and entrepreneurship, the hub aims to position Armenia as a leading player in global AI development.

Prime Minister Nikol Pashinyan described the project as the ‘Stargate of Armenia’, underscoring its potential to transform the national tech sector.

Firebird CEO Razmig Hovaghimian said the hub would help develop local talent and attract international attention, while the Afeyan Foundation, led by Noubar Afeyan, is set to come on board as a founding investor.

Instead of limiting its role to funding, the Armenian government will also provide land, tax breaks and simplified regulation to support the project, strengthening its push toward a competitive digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!