OpenAI boss Sam Altman fuels debate over dead internet theory

Sam Altman, chief executive of OpenAI, has suggested that the so-called ‘dead internet theory’ may hold some truth. The idea, long dismissed as a conspiracy theory, claims much of the online world is now dominated by computer-generated content rather than real people.

Altman noted on X that he had not previously taken the theory seriously but believed there were now many accounts run by large language models.

His remark drew criticism from users who argued that OpenAI itself had helped create the problem by releasing ChatGPT in 2022, which triggered a surge of automated content.

The spread of AI systems has intensified debate over whether online spaces are increasingly filled with artificially generated voices.

Some observers also linked Altman’s comments to his work on World Network, formerly Worldcoin, a project launched in 2019 to verify human identity online through biometric scans. That initiative has been promoted as a potential safeguard against the growing influence of AI-driven systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google Cloud study shows AI agents driving global business growth

A new Google Cloud study indicates that more than half of global enterprises are already using AI agents, with many reporting consistent revenue growth and faster return on investment.

The research, based on a survey of 3,466 executives across 24 countries, suggests agentic AI is moving from trial projects to large-scale deployment.

The findings by Google Cloud reveal that 52% of executives said their organisations actively use AI agents, while 39% reported launching more than ten agents. A group of early adopters, representing 13% of respondents, has gone further by dedicating at least half of their future AI budgets to agentic AI.

These companies are embedding agents across operations and are more likely to report returns in customer service, marketing, cybersecurity and software development.

The report also highlights how industries are tailoring adoption. Financial services focus on fraud detection, retail uses agents for quality control, and telecom operators apply them for network automation.

Regional variations are notable: European companies prioritise tech support, Latin American firms lean on marketing, while Asia-Pacific enterprises emphasise customer service.

Although enthusiasm is strong, challenges remain. Executives cited data privacy, security and integration with existing systems as key concerns.

Google Cloud executives said that early adopters are not only automating tasks but also reshaping business processes, with 2025 expected to mark a shift towards embedding AI intelligence directly into operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Coinbase relies on AI for nearly half of its code

Coinbase CEO Brian Armstrong said AI now generates around 40 per cent of the exchange’s code, a share he expects to surpass 50 per cent by October 2025. He emphasised that human oversight remains essential, as AI cannot be applied uniformly across all areas of the platform.

Armstrong confirmed that engineers were instructed to adopt AI development tools within a week, with those resisting the mandate dismissed. The move places Coinbase ahead of technology giants such as Microsoft and Google, which use AI for roughly 30 per cent of their code.

Security experts have raised concerns about the heavy reliance on AI. Industry figures warn that AI-generated code could contain bugs or miss critical context, posing risks for a platform holding over $420 billion in digital assets.

Among them, Larry Lyu called the strategy ‘a giant red flag’ for security-sensitive businesses.

Supporters argue that Coinbase’s approach is measured. Richard Wu of Tensor said AI could generate up to 90 per cent of high-quality code within five years, provided it is paired with thorough review and testing, since AI mistakes resemble those of junior engineers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Perplexity AI teams up with PayPal for fintech expansion

PayPal has partnered with Perplexity AI to provide PayPal and Venmo users in the US and select international markets with a free 12-month Perplexity Pro subscription and early access to the AI-powered Comet browser.

The $200 subscription allows unlimited queries, file uploads and advanced search features, while Comet offers natural language browsing to simplify complex tasks.

Industry analysts see the initiative as a way for PayPal to strengthen its position in fintech by integrating AI into everyday digital payments.

By linking accounts, users gain access to AI tools, cashback incentives and subscription management features, signalling a push toward what some describe as agentic commerce, where AI assistants guide financial and shopping decisions.

The deal also benefits Perplexity AI, a rising challenger in the search and browser market. Exposure to millions of PayPal customers could accelerate the adoption of its technology and provide valuable data for refining its models.

Analysts suggest the partnership reflects a broader trend of payment platforms evolving into service hubs that combine transactions with AI-driven experiences.

While enthusiasm is high among early users, concerns remain about data privacy and regulatory scrutiny over AI integration in finance.

Market reaction has been positive, with PayPal shares edging upward following the announcement. Observers believe such alliances will shape the next phase of digital commerce, where payments, browsing, and AI capabilities converge.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Advanced Pilot Assistance System enters year-long trial on CB Pacific

Mythos AI has installed its Advanced Pilot Assistance System (APAS) on the CB Pacific, a chemical tanker operated by CB Tankers under the Lomar group. The deployment marks the beginning of a year-long trial to introduce advanced bridge intelligence to the commercial shipping industry.

APAS uses a radar-first perception system that integrates with existing ship radars, processing multiple data streams to deliver prioritised alerts. By reducing its reliance on machine vision, the system aims to eliminate distractions, enhance decision-making, and improve navigation safety.
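The article does not detail how APAS ranks contacts, but prioritised collision alerts from radar tracks are commonly built on closest point of approach (CPA) and time to CPA (TCPA) calculations. The sketch below is purely illustrative: the class names, thresholds and sample tracks are assumptions for demonstration, not Mythos AI’s implementation.

```python
from dataclasses import dataclass
import math

@dataclass
class RadarTrack:
    """A simplified radar target: position (nm) and velocity (kn) relative to own ship."""
    name: str
    x: float   # nautical miles, east of own ship
    y: float   # nautical miles, north of own ship
    vx: float  # knots, eastward relative velocity
    vy: float  # knots, northward relative velocity

def cpa_tcpa(t: RadarTrack) -> tuple[float, float]:
    """Closest point of approach (nm) and time until it (hours) for one relative track."""
    speed_sq = t.vx ** 2 + t.vy ** 2
    if speed_sq == 0:                      # target stationary relative to own ship
        return math.hypot(t.x, t.y), 0.0
    tcpa = -(t.x * t.vx + t.y * t.vy) / speed_sq
    tcpa = max(tcpa, 0.0)                  # only future approaches matter
    cpa = math.hypot(t.x + t.vx * tcpa, t.y + t.vy * tcpa)
    return cpa, tcpa

def prioritised_alerts(tracks: list[RadarTrack], cpa_limit=0.5, tcpa_limit=0.5):
    """Return targets breaching the (hypothetical) CPA/TCPA thresholds, most urgent first."""
    alerts = []
    for t in tracks:
        cpa, tcpa = cpa_tcpa(t)
        if cpa < cpa_limit and tcpa < tcpa_limit:
            alerts.append((tcpa, cpa, t.name))
    return sorted(alerts)                  # soonest (smallest TCPA) first

if __name__ == "__main__":
    tracks = [
        RadarTrack("tanker A", x=3.0, y=0.0, vx=-10.0, vy=0.0),   # closing head-on
        RadarTrack("fishing B", x=1.0, y=4.0, vx=0.0, vy=2.0),    # moving away
    ]
    for tcpa, cpa, name in prioritised_alerts(tracks):
        print(f"ALERT {name}: CPA {cpa:.2f} nm in {tcpa * 60:.0f} min")
```

In this toy run only the head-on contact triggers an alert; a production bridge system would fuse many more data streams and apply rule-based logic aligned with maritime collision regulations.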

The CB Pacific, equipped with a Furuno radar and sailing consistent routes, will serve as a testbed to evaluate APAS performance in live conditions. Trials will assess collision prediction, safe navigation, signal processing, and compliance with maritime rules.

Mythos AI emphasises that APAS is designed to support crews, not replace them. CEO Geoff Douglass said the installation marks the company’s first operational use of the system on a tanker and a milestone in its wider commercial roadmap.

For LomarLabs, the pilot showcases its hands-on innovation model, offering vessel access and oversight to facilitate collaboration with startups. Managing Director Stylianos Papageorgiou said the radar-first architecture shows how modular autonomy can be advanced through trust, time, and fleet partnerships.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI set to spend $10bn on Broadcom AI chips

OpenAI has reportedly placed a $10bn order with Broadcom to mass-produce custom AI chips, due for shipment in 2026. Sources told the Financial Times that the move would help reduce OpenAI’s dependence on Nvidia, its primary supplier.

Sam Altman recently said OpenAI will use ‘well over 1m GPUs’ by the end of 2025, highlighting the company’s accelerating demand for computing power. In contrast, Elon Musk’s xAI is expected to double its Nvidia Hopper GPUs to around 200,000.

Broadcom confirmed a large custom chip order during its latest earnings call, without naming the buyer. The company’s AI revenue rose 63 per cent to $5.2bn, chip sales grew 57 per cent to $9.1bn, and shares gained nearly 5 per cent.

The new order is expected to be for internal use rather than external customers. Industry observers suggest that OpenAI’s decision signals a strategic shift, allowing the ChatGPT maker to secure supply for its AI expansion while diversifying beyond Nvidia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hollywood’s Warner Bros. Discovery challenges an AI firm over copyright claims

Warner Bros. Discovery has filed a lawsuit against AI company Midjourney, accusing it of large-scale infringement of its intellectual property. The move follows similar actions by Disney and Universal, signalling growing pressure from major studios on AI image and video generators.

The filing includes examples of Midjourney-produced images featuring DC Comics, Looney Tunes and Rick and Morty characters. Warner Bros. Discovery argues that such output undermines its business model, which relies heavily on licensed images and merchandise.

The studio also claims Midjourney profits from copyright-protected works through its subscription services and the ‘Midjourney TV’ platform.

A central question in the case is whether AI-generated material reproducing copyrighted characters constitutes infringement under US law. The courts have not decided on this issue, making the outcome uncertain.

Warner Bros. Discovery is also challenging how Midjourney trains its models, pointing to past statements from company executives suggesting vast quantities of material were indiscriminately collected to build its systems.

With three major Hollywood studios now pursuing lawsuits, the outcome of these cases could establish a precedent for how courts treat AI-generated content.

Warner Bros. Discovery seeks damages that could reach $150,000 per infringed work, or Midjourney’s profits linked to the alleged violations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New OpenAI platform aims to connect employers and talent

OpenAI has announced plans to launch an AI-powered hiring platform to compete directly with LinkedIn. The service, the OpenAI Jobs Platform, is expected to debut by mid-2026.

Fidji Simo, OpenAI’s CEO of Applications, said the platform will help businesses and employees find ideal matches using AI, with tailored options for small businesses and local governments. The Texas Association of Business plans to use the platform to connect employers with talent.

The move highlights OpenAI’s efforts to expand beyond ChatGPT into a broader range of applications, including a browser, a social media app, and recruitment. The company faces intense competition from Microsoft-owned LinkedIn, which has been adding AI features of its own.

Alongside the hiring initiative, OpenAI is preparing to pilot its Certifications programme through the OpenAI Academy. The scheme will provide certificates for AI proficiency, with Walmart among the first partners.

OpenAI aims to certify 10 million Americans by 2030 as part of its commitment to advancing AI literacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GPT-5 flunks kindergarten test despite PhD-level promise

Critics quickly derided OpenAI’s newly released GPT-5 for failing tasks that a five-year-old could ace, raising questions about the disparity between hype and performance.

Despite being promoted as ‘PhD-level’, the model produced a distorted, blob-like map of North America and invented mismatched portraits of US presidents with fictional names.

AI researcher Gary Marcus lowered the threshold by giving GPT-5 a kindergarten-level challenge. The result was a clear fail. He posted: ‘GPT-5 failed a kindergarten-level task. Speechless.’ He criticised the rushed rollout and the hype that may have obscured the model’s visual reasoning weaknesses.

Further tests exposed inconsistencies: when asked to map France and label its 12 most populous cities, GPT-5 returned inaccurate or incomplete results, omitting Paris entirely and naming Orléans, which does not rank among the country’s largest cities.

Oddly, when the same queries were posed in text-only form, the model performed better, highlighting the weakness in its image generation and visual logic.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EASA survey reveals cautious optimism over aviation AI ethics

The European Union Aviation Safety Agency (EASA) has published survey results probing the ethical outlook of aviation professionals on AI deployment, released during its AI Days event in Cologne.

The AI Days conference gathered nearly 200 on-site attendees from across the globe, with even more participating online.

The survey measured acceptance, trust and comfort across eight hypothetical AI use cases, yielding an average acceptance score of 4.4 out of 7. Despite growing interest, two-thirds of respondents declined at least one scenario.

Their key concerns included limitations of AI performance, privacy and data protection, accountability, safety risks and the potential for workforce de-skilling. A clear majority called for stronger regulation and oversight by EASA and national authorities.

In a keynote address, Christine Berg from the European Commission highlighted that AI in aviation is already delivering practical benefits, such as optimising air traffic flow and supporting predictive maintenance, while emphasising the need for explainable, reliable and certifiable systems under the EU AI Act.

Survey findings will feed into EASA’s AI Roadmap and prompt public consultations as the agency advances policy and regulatory frameworks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!