Social media authenticity questioned as Altman points to bot-like behaviour

Sam Altman, X enthusiast and Reddit shareholder, has expressed doubts over whether social media content can still be distinguished from bot activity. His remarks followed an influx of praise for OpenAI Codex on Reddit, where users questioned whether such posts were genuine.

Altman noted that humans are increasingly adopting quirks of AI-generated language, blurring the line between authentic and synthetic speech. He also pointed to factors such as social media optimisation for engagement and astroturfing campaigns, which amplify suspicions of fakery.

The comments follow the backlash against OpenAI over the rollout of GPT-5, which saw Reddit communities shift from celebratory to critical. Altman acknowledged flaws in a Reddit AMA, but the fallout left lasting scepticism and lower enthusiasm among AI users.

Underlying this debate is the wider reality that bots dominate much of the online environment. Imperva estimates that more than half of 2024’s internet traffic was non-human, while X’s own Grok chatbot admitted to hundreds of millions of bots on the platform.

Some observers suggest Altman’s comments may foreshadow an OpenAI-backed social media venture. Whether such a project could avoid the same bot-related challenges remains uncertain, with research suggesting that even bot-only networks eventually create echo chambers of their own.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI adoption drops at large US companies for the first time since 2023

Despite the hype surrounding AI, new data suggests corporate adoption of the technology is slowing.

A biweekly survey by the US Census Bureau found AI use among firms with over 250 employees dropped from nearly 14 per cent in mid-June to under 12 per cent in August, marking the largest decline since the survey began in November 2023.

Smaller companies with fewer than four workers saw a slight increase, but mid-sized businesses largely reported flat or falling AI adoption. The findings are worrying for tech investors and CEOs, who have invested heavily in enterprise AI in the hope of boosting productivity and revenue across industries.

So far, up to 95 per cent of companies using AI have not generated new income from the technology.

The decline comes amid underwhelming performance from high-profile AI releases. OpenAI’s GPT-5, expected to revolutionise enterprise AI, underperformed in benchmark tests, while some companies are rehiring human staff after previously reducing headcount based on AI promises.

Analysts warn that AI innovations may have plateaued, leaving enterprise adoption struggling to justify prior investments.

Unless enterprise AI starts delivering measurable results, corporate usage could continue to decline, signalling a potential slowdown in the broader AI-driven growth many had anticipated.

Anthropic AI faces legal setback in authors’ piracy lawsuit

A federal judge has rejected the $1.5 billion settlement Anthropic agreed to in a piracy lawsuit filed by authors.

Judge William Alsup expressed concerns that the deal was ‘nowhere close to complete’ and could be forced on writers without proper input.

The lawsuit involves around 500,000 authors whose works were allegedly used without permission to train Anthropic’s large language models. The proposed settlement would have granted $3,000 per work, a sum far exceeding previous copyright recoveries.

However, the judge criticised the lack of clarity regarding the list of works, authors, notification process, and claim forms.

Alsup instructed the lawyers to provide clear notice to class members and allow them to opt in or out. He also emphasised that Anthropic must be shielded from future claims on the same issue. The court set deadlines for a final list of works by September 15 and approval of all related documents by October 10.

The ruling highlights ongoing legal challenges for AI companies using copyrighted material for training large language models instead of relying solely on licensed or public-domain data.

Fake GitHub downloads deliver GPUGate malware to EU IT staff

A malvertising campaign is targeting IT workers in the EU with fake GitHub Desktop installers, according to Arctic Wolf. The goal is to steal credentials, deploy ransomware, and infiltrate sensitive systems. The operation has reportedly been active for over six months.

Attackers used malicious Google Ads that redirected users to doctored GitHub repositories. Modified README files mimicked genuine download pages but linked to a lookalike domain. macOS users received the AMOS Stealer, while Windows victims downloaded bloated installers hiding malware.

The Windows malware evaded detection using GPU-based checks, refusing to run in sandboxes that lacked real graphics drivers. On genuine machines, it copied itself to %APPDATA%, sought elevated privileges, and altered Defender settings. Analysts dubbed the technique GPUGate.

The payload persisted by creating privileged tasks and sideloading malicious DLLs into legitimate executables. Its modular system could download extra malware tailored to each victim. The campaign was geo-fenced to EU targets and relied on redundant command servers.
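The GPU-based check can be pictured as a simple heuristic over the machine's video adapters: sandboxes typically expose only virtual display drivers, so the malware refuses to run when no real GPU appears. The sketch below is purely illustrative (the adapter names and function are hypothetical, not taken from the GPUGate sample, which queries the Windows graphics stack directly); defenders can reason about the same signal in reverse.

```python
# Illustrative sketch (hypothetical) of a GPU-presence heuristic like the one
# GPUGate reportedly uses for sandbox evasion. Adapter names are assumed
# examples; a real sample would query the Windows graphics stack directly.

VIRTUAL_ADAPTERS = {
    "microsoft basic display adapter",
    "vmware svga 3d",
    "virtualbox graphics adapter",
    "qxl",  # common in QEMU/KVM guests
}

def looks_like_sandbox(adapter_names):
    """Return True when no real graphics driver appears to be present."""
    names = [n.strip().lower() for n in adapter_names if n.strip()]
    if not names:
        return True  # no adapters reported at all: almost certainly a VM
    # Only virtual adapters present -> treat the host as an analysis sandbox.
    return all(any(v in n for v in VIRTUAL_ADAPTERS) for n in names)

print(looks_like_sandbox(["VMware SVGA 3D"]))           # virtual-only: sandbox
print(looks_like_sandbox(["NVIDIA GeForce RTX 4070"]))  # real GPU present
```

Analysis environments that emulate a realistic graphics driver would defeat exactly this class of check, which is one reason the technique drew attention.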

Researchers warn that IT staff are prime targets due to their access to codebases and credentials. With the campaign still active, Arctic Wolf has published indicators of compromise, Yara rules, and security advice to mitigate the GPUGate threat.

Orson Welles lost film reconstructed with AI

More than 80 years after Orson Welles’ The Magnificent Ambersons was cut and lost, AI is being used to restore 43 missing minutes of the film.

Amazon-backed Showrunner, led by Edward Saatchi, is experimenting with AI technology to rebuild the destroyed sequences as part of a broader push to reimagine how Hollywood might use AI in storytelling.

The project is not intended for commercial release, since Showrunner has not secured rights from Warner Bros. or Concord, but instead aims to explore what could have been the director’s original vision.

The initiative marks a shift in the role of AI in filmmaking. Rather than serving only as a tool for effects, dubbing or storyboarding, it is being positioned as a foundation for long-form narrative creation.

Showrunner is developing AI models capable of sustaining complex plots, with the goal of eventually generating entire films. Saatchi envisions the platform as a type of ‘Netflix of AI,’ where audiences might one day interact with intellectual property and generate their own stories.

To reconstruct The Magnificent Ambersons, the company is combining traditional techniques with AI tools. New sequences will be shot with actors, while AI will be used for face and pose transfer to replicate the original cast.

Thousands of archival set photographs are being used to digitally recreate the film’s environments.

Filmmaker Brian Rose, who has rebuilt 30,000 missing frames over five years, has reconstructed set movements and timing to match the lost scenes, while VFX expert Tom Clive will assist in refining the likenesses of the original actors.

The project underlines both the creative possibilities and the ethical tensions surrounding AI in cinema. While the reconstructed footage will not be commercially exploited, it raises questions about the use of copyrighted material in training AI and the risk of replacing human creators.

For many, however, the experiment offers a glimpse of what Welles’ ambitious work might have looked like had it survived intact.

OpenAI study links AI hallucinations to flawed testing incentives

OpenAI researchers say large language models continue to hallucinate because current evaluation methods encourage them to guess rather than admit uncertainty.

Hallucinations, defined as confident but false statements, persist despite advances in models such as GPT-5. Low-frequency facts, like specific dates or names, are particularly vulnerable.

The study argues that while pretraining predicts the next word without true or false labels, the real problem lies in accuracy-based testing. Evaluations that reward lucky guesses discourage models from saying ‘I don’t know’.

Researchers suggest penalising confident errors more heavily than uncertainty, and awarding partial credit when AI models acknowledge limits in knowledge. They argue that only by reforming evaluation methods can hallucinations be meaningfully reduced.
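The incentive argument can be made concrete with a little expected-value arithmetic (the numbers below are illustrative, not from the OpenAI study): under accuracy-only grading a wrong answer costs nothing, so guessing always beats abstaining; once confident errors are penalised and uncertainty earns partial credit, abstaining becomes rational at low confidence.

```python
# Minimal sketch of the scoring-incentive argument. p is the model's
# probability of answering correctly; all reward values are illustrative.

def expected_score(p, right=1.0, wrong=0.0, abstain=0.0):
    """Expected score for answering, and the fixed score for abstaining."""
    return p * right + (1 - p) * wrong, abstain

p = 0.3  # a low-confidence, rare-fact question

# Accuracy-only grading: wrong answers cost nothing, so guessing wins.
answer, idk = expected_score(p, right=1.0, wrong=0.0, abstain=0.0)
print(answer > idk)  # guessing beats saying "I don't know"

# Penalise confident errors and give partial credit for admitting limits.
answer, idk = expected_score(p, right=1.0, wrong=-1.0, abstain=0.25)
print(answer < idk)  # now "I don't know" is the rational choice
```

Under the second scheme, answering only pays off when the model's confidence exceeds the break-even point implied by the penalties, which is the behaviour the researchers want evaluations to reward.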

Quantum-proof cryptography emerges as key test for stablecoins

Stablecoins have become central to the digital economy, with billions in daily transactions and stronger regulatory backing under the GENIUS Act. Yet experts warn that advances in quantum computing could undermine their very foundations.

Elliptic curve and RSA cryptography, widely used in stablecoin systems, are expected to be breakable once ‘Q-Day’ arrives. Quantum-equipped attackers could instantly derive private keys from public addresses, exposing entire networks to theft.

The immutability of blockchains makes upgrading cryptographic schemes especially challenging. Dormant wallets and legacy addresses may prove vulnerable, putting billions of dollars at risk if issuers fail to take action promptly.

Researchers highlight lattice-based and hash-based algorithms as viable ‘quantum-safe’ alternatives. Stablecoins built with crypto-agility, enabling seamless upgrades, will better adapt to new standards and avoid disruptive forks.
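Crypto-agility in this context means tagging every signature with an algorithm identifier so that a network can retire a broken scheme and accept a post-quantum one without a disruptive fork. The sketch below is a hypothetical illustration of that envelope pattern (the algorithm names and HMAC stand-ins are assumptions for the sake of a runnable example, not real blockchain signature schemes).

```python
# Hypothetical sketch of "crypto-agility": every signature carries an
# algorithm tag, so verifiers can disallow a broken scheme (e.g. after
# Q-Day) without changing message formats. HMACs stand in for real
# signature algorithms purely for illustration.
import hashlib
import hmac

ALGORITHMS = {
    "classical-v1": lambda key, msg: hmac.new(key, msg, hashlib.sha256).hexdigest(),
    "pq-v2":        lambda key, msg: hmac.new(key, msg, hashlib.sha3_256).hexdigest(),
}

def sign(alg, key, msg):
    return {"alg": alg, "sig": ALGORITHMS[alg](key, msg)}

def verify(envelope, key, msg, allowed=frozenset(ALGORITHMS)):
    if envelope["alg"] not in allowed:
        return False  # scheme has been retired by policy
    expected = ALGORITHMS[envelope["alg"]](key, msg)
    return hmac.compare_digest(envelope["sig"], expected)

env = sign("classical-v1", b"key", b"transfer 100")
print(verify(env, b"key", b"transfer 100"))                     # valid today
print(verify(env, b"key", b"transfer 100", allowed={"pq-v2"}))  # rejected after migration
```

The design choice is that migration is a policy change (shrinking the `allowed` set) rather than a format change, which is what lets upgrades avoid hard forks.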

Regulators are also moving. NIST is finalising post-quantum cryptographic standards, and new rules will likely be established before 2030. Stablecoins that embed resilience today may set the global benchmark for digital trust in the quantum age.

AI threatens the future of entry-level jobs

The rise of AI puts traditional entry-level roles under pressure, raising concerns that career ladders may no longer function as they once did. Industry leaders, including Anthropic CEO Dario Amodei, warn that AI could replace half of all entry-level jobs, since machines can operate nonstop.

Venture capital firm SignalFire found that hiring of graduates with under one year of experience at major tech firms fell by 50 per cent between 2019 and 2024. The decline has been consistent across business functions, from sales and marketing to engineering and operations.

Analysts argue that while career pathways are being reshaped, the ladder’s bottom rung is disappearing, forcing graduates to acquire skills independently before entering the workforce.

Experts stress that the shift does not mean careers are over for new graduates, but it does signal a more challenging transition. Universities are already adapting by striking partnerships with AI companies, while some economists point out that past technological revolutions took decades to reshape employment.

Yet others warn that unchecked AI could eventually threaten not only entry-level roles but all levels of work, raising questions about the future stability of corporate structures.

Mistral AI pushes growth with new funding and global deals

Founded in 2023 by ex-Google DeepMind and Meta researchers, Mistral has quickly gained global attention with its open-source models and consumer app, which hit one million downloads within two weeks of launch.

Mistral AI is now seeking fresh funding at a reported $14 billion valuation, more than double its worth just a year ago. Its investors include Microsoft, Nvidia, Cisco, and Bpifrance, and it has signed partnerships with AFP, Stellantis, Orange, and France’s army.

Its growing suite of models spans large language, audio, coding, and reasoning systems, while its enterprise tools integrate with services such as Asana and Google Drive. French president Emmanuel Macron has openly endorsed the firm, framing it as a strategic alternative to US dominance in AI.

OpenAI backs AI-generated film Critterz for 2026 release

OpenAI is supporting the production of Critterz, an AI-assisted animated film set for a global theatrical release in 2026. The project aims to show that AI can streamline filmmaking, cutting costs and production time.

Partnering with Vertigo Films and Native Foreign, the film is being produced in nine months, far faster than the usual three years for animated features.

The film, budgeted under $30 million, combines OpenAI’s GPT-5 and DALL·E with traditional voice acting and hand-drawn elements. Building on the acclaimed 2023 short, Critterz will debut at the Cannes Film Festival and expand on a story where humans and AI creatures share the same world.

Writers James Lamont and Jon Foster, known for Paddington in Peru, have been brought in to shape the screenplay.

While producers highlight AI’s creative potential, concerns remain about authenticity and job security in the industry. Some fear AI films could feel impersonal, while major studios continue to defend intellectual property.

Warner Bros., Disney, and Universal are in court with Midjourney over alleged copyright violations.

Despite the debate, OpenAI remains committed to its role in pushing generative storytelling. The company is also expanding its infrastructure, forecasting spending of $115 billion by 2029, with $8 billion planned for this year alone.
