Sam Altman predicts AGI could arrive before 2030

OpenAI CEO Sam Altman has warned that AI could soon automate up to 40 per cent of the tasks humans currently perform. He made the remarks in an interview with the German newspaper Die Welt, highlighting the potential economic shift AI will trigger.

Altman described OpenAI’s latest model, GPT-5, as the most advanced yet and claimed it is ‘smarter than me and most people’. He said artificial general intelligence (AGI), capable of outperforming humans in all areas, could arrive before 2030.

Instead of focusing on job losses, Altman suggested examining the percentage of tasks that AI will automate. He predicted that 30 to 40 per cent of tasks currently carried out by humans may soon be completed by AI systems.

These comments contribute to the growing debate about the societal impact of AI, with mass layoffs already being linked to automation. Altman emphasised that this wave of change will reshape economies and workplaces, requiring businesses and governments to prepare for disruption.

As AGI approaches, Altman urged individuals to focus on acquiring in-demand skills to stay relevant in an AI-enabled economy. The relationship between humans and machines, he said, will be permanently reshaped by these developments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Qwen3-Omni tops Hugging Face as China’s open AI challenge grows

Alibaba’s Qwen3-Omni multimodal AI system has quickly risen to the top of Hugging Face’s trending model list, challenging closed systems from OpenAI and Google. The series unifies text, image, audio, and video processing in a single model, signalling the rapid growth of Chinese open-source AI.

Qwen3-Omni-30B-A3B currently leads Hugging Face’s list, followed by the image-editing model Qwen-Image-Edit-2509. Alibaba’s cloud division describes Qwen3-Omni as the first fully integrated multimodal AI framework built for real-world applications.

Self-reported benchmarks suggest Qwen3-Omni outperforms Qwen2.5-Omni-7B, OpenAI’s GPT-4o, and Google’s Gemini-2.5-Flash in audio recognition, comprehension, and video understanding tasks.

Open-source dominance is growing, with Alibaba’s models taking half the top 10 spots on Hugging Face rankings. Tencent, DeepSeek, and OpenBMB filled most of the remaining positions, leaving IBM as the only Western representative.

The ATOM Project warned that US leadership in AI could erode as open models from China gain adoption. It argued that China’s approach draws businesses and researchers away from American systems, which have become increasingly closed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The strategic shift toward open-source AI

The release of DeepSeek’s open-source reasoning model in January 2025, followed by the Trump administration’s July endorsement of open-source AI as a national priority, has marked a turning point in the global AI race, writes Jovan Kurbalija in his blog ‘The strategic imperative of open source AI’.

What once seemed an ideological stance is now being reframed as a matter of geostrategic necessity. Despite their historical reliance on proprietary systems, China and the United States have embraced openness as the key to competitiveness.

Kurbalija adds that history offers clear lessons that open systems tend to prevail. Just as TCP/IP defeated OSI in the 1980s and Linux outpaced costly proprietary operating systems in the 1990s, today’s open-source AI models are challenging closed platforms. Companies like Meta and DeepSeek have positioned their tools as the new foundations of innovation, while proprietary players such as OpenAI are increasingly seen as constrained by their closed architectures.

The advantages of open-source AI are not only philosophical but practical. Open models evolve faster through global collaboration, lower costs by sharing development across vast communities, and attract younger talent motivated by purpose and impact.

They are also more adaptable, making them easier to integrate into industries, education, and governance. Importantly, breakthroughs in efficiency show that smaller, smarter models can now rival giant proprietary systems, further broadening access.

The momentum is clear: open-source AI is emerging as the dominant paradigm. Like the internet protocols and operating systems that shaped previous digital eras, openness is proving both more ethical and more strategically effective. As researchers, governments, and companies increasingly adopt this approach, open-source AI could become the backbone of the next phase of the digital world.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk escalates legal battle with new lawsuit against OpenAI

Elon Musk’s xAI has sued OpenAI, alleging a coordinated and unlawful campaign to steal its proprietary technology. The complaint claims OpenAI targeted former xAI staff to obtain source code, training methods, and data centre strategies.

The lawsuit claims OpenAI recruiter Tifa Chen offered large compensation packages to engineers who then allegedly uploaded xAI’s source code to personal devices. Notable incidents include Xuechen Li confessing to code theft and Jimmy Fraiture allegedly transferring confidential files repeatedly via AirDrop.

Legal experts note the case centres on employee poaching and the definition of xAI’s ‘secret sauce,’ including GPU racking, vendor contracts, and operational playbooks.

Liability may depend on whether OpenAI knowingly directed recruiters, while the company could defend itself by showing independent creation with time-stamped records.

xAI is seeking damages, restitution, and injunctions requiring OpenAI to remove its materials and destroy models built using them. The lawsuit is Musk’s latest legal action against OpenAI, following a recent antitrust case involving Apple over alleged market dominance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tech giants warn Digital Markets Act is failing

Apple and Google have urged the European Union to revisit its Digital Markets Act, arguing the law is damaging users and businesses.

Apple said the rules have forced delays to new features for European customers, including live translation on AirPods and improvements to Apple Maps. It warned that competition requirements could weaken security and slow innovation without boosting the EU economy.

Google raised concerns that its search results must now prioritise intermediary travel sites, leading to higher costs for consumers and fewer direct sales for airlines and hotels. It added that AI services may arrive in Europe up to a year later than elsewhere.

Both firms stressed that enforcement should be more consistent and user-focused. The European Commission is reviewing the Act, with formal submissions under consideration.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CISA warns of advanced campaign exploiting Cisco appliances in federal networks

US cybersecurity officials have issued an emergency directive after hackers breached a federal agency by exploiting critical flaws in Cisco appliances. CISA warned the campaign poses a severe risk to government networks.

Experts told CNN they believe the hackers are state-backed and operating out of China, raising alarm among officials. Hundreds of compromised devices are reportedly in use across the federal government, and CISA has directed agencies to rapidly assess the scope of the breach.

Cisco confirmed that US government agencies urgently alerted it to the breaches in May and that it quickly assigned a specialised team to investigate. The company provided advanced detection tools, analysed compromised environments, and examined firmware from infected devices.

Cisco stated that the attackers exploited multiple zero-day flaws and employed advanced evasion techniques. It suspects a link to the ArcaneDoor campaign reported in early 2024.

CISA has withheld details about which agencies were affected or the precise nature of the breaches, underscoring the gravity of the situation. Investigations are currently underway to contain the ongoing threat and prevent further exploitation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK government considers supplier aid after JLR cyberattack

Jaguar Land Rover (JLR) is recovering from a disruptive cyberattack, gradually bringing its systems back online. The company is focused on rebuilding its operations and restoring confidence and momentum as key digital functions return.

JLR said it has boosted its IT processing capacity for invoicing to clear its payment backlog. The Global Parts Logistics Centre is also resuming full operations, restoring parts distribution to retailers.

The financial system used for processing vehicle wholesales has been restored, allowing the company to resume car sales and registration. JLR is collaborating with the UK’s National Cyber Security Centre (NCSC) and law enforcement to ensure a secure restart of operations.

Production remains suspended at JLR’s three UK factories in Halewood, Solihull, and Wolverhampton. The company typically produces around 1,000 cars a day, but staff have been instructed to stay at home since the August cyberattack.

The government is considering support packages for the company’s suppliers, some of whom are under financial pressure. A group identifying itself as Scattered Lapsus$ Hunters has claimed responsibility for the incident.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn expands AI training with default data use

LinkedIn will use member profile data to train its AI systems by default from 3 November 2025. The policy, already in place in the US and select markets, will now extend to more regions and applies to users aged 18 and over; members who prefer not to share their information must opt out manually via account settings.

According to LinkedIn, the types of data that may be used include account details, email addresses, payment and subscription information, and service-related data such as IP addresses, device IDs, and location information.

Once the setting is disabled, profile data will no longer be used for AI training, although information collected earlier may remain in the system. Users can request the removal of past data through a Data Processing Objection Form.

Meta and X have already adopted similar practices in the US, allowing their platforms to use user-generated posts for AI training. LinkedIn insists its approach complies with privacy rules but leaves the choice in members’ hands.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Content Signals Policy by Cloudflare lets websites signal data use preferences

Cloudflare has announced the launch of its Content Signals Policy, a new extension to robots.txt that allows websites to express their preferences for how their data is used after access. The policy is designed to help creators maintain open content while preventing misuse by data scrapers and AI trainers.

The new tool enables website owners to specify, in a machine-readable format, whether they permit search indexing, AI input, or AI model training. Operators can set each signal to ‘yes’, ‘no’, or leave it blank to indicate no stated preference, giving them fine-grained control over how their content may be used.
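For illustration, a robots.txt entry using this scheme might look something like the sketch below. The three signal names (search, ai-input and ai-train) come from Cloudflare’s announcement, but treat the exact syntax here as indicative rather than authoritative:

    User-agent: *
    Content-Signal: search=yes, ai-train=no
    Allow: /

In this sketch, search indexing is permitted, AI training is refused, and ai-input is left unset to express no stated preference, mirroring the default Cloudflare says it applies for customers on its managed robots.txt.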

Cloudflare says the policy tackles the free-rider problem, where scraped content is reused without credit. With bot traffic set to surpass human traffic by 2029, it calls for clear, standard rules to protect creators and keep the web open.

Customers already using Cloudflare’s managed robots.txt will have the policy automatically applied, with a default setting that allows search but blocks AI training. Sites without a robots.txt file can opt in to publish the human-readable policy text and add their own preferences when ready.

Cloudflare emphasises that content signals are not enforcement mechanisms but a means of communicating expectations. It is releasing the policy under a CC0 licence to encourage broad adoption and is working with standards bodies to ensure the rules are recognised across the industry.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Quantum-classical hybrid outperforms classical methods, HSBC and IBM study finds

HSBC and IBM have reported the first empirical evidence of the value of quantum computers in solving real-world problems in bond trading. Their joint trial showed a 34% improvement in predicting the likelihood of a trade being filled at a quoted price compared to classical-only techniques.

The trial used a hybrid approach that combined quantum and classical computing to optimise quote requests in over-the-counter bond markets. Production-scale trading data from the European corporate bond market was run on IBM quantum computers to predict winning probabilities.
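As a rough illustration of that division of labour, here is a minimal, self-contained Python sketch of the general pattern: a small simulated quantum circuit turns inputs into features, and a classical model then predicts the probability of a fill. The circuit, the synthetic quote data, and the labels are all invented for the example and are not HSBC’s or IBM’s actual workflow:

    import numpy as np

    rng = np.random.default_rng(0)

    # "Quantum" step: a 2-qubit data-encoding circuit, simulated classically.
    def ry(theta):
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[c, -s], [s, c]])

    CNOT = np.array([[1., 0., 0., 0.],
                     [0., 1., 0., 0.],
                     [0., 0., 0., 1.],
                     [0., 0., 1., 0.]])

    def quantum_features(x):
        """Encode two input values as rotation angles, entangle the qubits,
        and return the two single-qubit Z expectation values as features."""
        state = np.zeros(4)
        state[0] = 1.0                               # start in |00>
        state = np.kron(ry(x[0]), ry(x[1])) @ state  # data-encoding rotations
        state = CNOT @ state                         # entangling gate
        p = state ** 2                               # measurement probabilities
        z0 = p[0] + p[1] - p[2] - p[3]               # <Z> on qubit 0
        z1 = p[0] - p[1] + p[2] - p[3]               # <Z> on qubit 1
        return np.array([z0, z1])

    # Classical step: logistic regression on the quantum-derived features.
    # Synthetic stand-ins for quote attributes and fill/no-fill outcomes.
    X = rng.uniform(0, np.pi, size=(200, 2))
    y = (np.cos(X[:, 0]) * np.cos(X[:, 1]) + 0.1 * rng.normal(size=200) > 0)
    y = y.astype(float)

    Phi = np.array([quantum_features(x) for x in X])
    w, b = np.zeros(2), 0.0
    for _ in range(2000):                            # plain gradient descent
        p = 1 / (1 + np.exp(-(Phi @ w + b)))         # predicted fill probability
        w -= 0.5 * (Phi.T @ (p - y)) / len(y)
        b -= 0.5 * np.mean(p - y)

    print("train accuracy:", np.mean((p > 0.5) == y))

In a real deployment, the feature-extraction step would run on quantum hardware rather than a classical simulation, while data preparation and the final prediction remain classical.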

The results demonstrate how quantum techniques can outperform standard methods in addressing the complex and dynamic factors in algorithmic bond trading. HSBC said the findings offer a competitive edge and could redefine how the financial industry prices customer inquiries.

Philip Intallura, HSBC Group Head of Quantum Technologies, called the trial ‘a ground-breaking world-first in bond trading’. He said the results show that quantum computing is on the cusp of delivering near-term value for financial services.

IBM’s latest Heron processor played a key role in the workflow, augmenting classical computation to uncover hidden pricing signals in noisy data. IBM said such work helps unlock new algorithms and applications that could transform industries as quantum systems scale.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!