Social media authenticity questioned as Altman points to bot-like behaviour

Sam Altman, X enthusiast and Reddit shareholder, has expressed doubts over whether social media content can still be distinguished from bot activity. His remarks followed an influx of praise for OpenAI Codex on Reddit, where users questioned whether such posts were genuine.

Altman noted that humans are increasingly adopting quirks of AI-generated language, blurring the line between authentic and synthetic speech. He also pointed to factors such as social media optimisation for engagement and astroturfing campaigns, which amplify suspicions of fakery.

The comments follow the backlash OpenAI faced over the rollout of GPT-5, which saw Reddit communities shift from celebratory to critical. Altman acknowledged flaws in a Reddit AMA, but the fallout left lasting scepticism and dampened enthusiasm among AI users.

Underlying this debate is the wider reality that bots dominate much of the online environment. Imperva estimates that more than half of 2024’s internet traffic was non-human, while X’s own Grok chatbot admitted to hundreds of millions of bots on the platform.

Some observers suggest Altman’s comments may foreshadow an OpenAI-backed social media venture. Whether such a project could avoid the same bot-related challenges remains uncertain, with research suggesting that even bot-only networks eventually create echo chambers of their own.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Superconducting qubits power Stanford’s quantum router advance

Quantum computers could become more efficient with a new quantum router that directs data more quickly within machines. Researchers at Stanford have built the component, which could eventually form the backbone of quantum random access memory (QRAM).

The router utilises superconducting qubits, controlled by electromagnetic pulses, to transmit information to quantum addresses. Unlike classical routers, it can encode addresses in superposition, allowing data to be routed to two locations simultaneously.
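The routing principle can be illustrated in a few lines of simulation code. The sketch below uses Qiskit (an assumption for illustration; the Stanford device is superconducting hardware, not a simulator) to show how an address qubit in superposition steers a data qubit through a controlled-SWAP, putting the route itself into superposition.

```python
# Minimal sketch of superposed routing, not the Stanford design:
# an address qubit in superposition decides which output port a data
# qubit ends up in, so the route is taken "both ways at once".
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(3)  # qubit 0: address, qubit 1: port A, qubit 2: port B
qc.x(1)                 # load the data bit |1> into port A
qc.h(0)                 # put the address into superposition
qc.cswap(0, 1, 2)       # if address = |1>, swap the data into port B

# The data is now entangled with the address: measuring the address as 0
# finds the data in port A; measuring 1 finds it in port B.
print(Statevector.from_instruction(qc))
```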

In tests with three qubits, the router achieved a fidelity of around 95%. If integrated into QRAM, it could unlock new algorithms by placing information into quantum states where locations remain indeterminate.

Experts say the advance could benefit areas such as quantum machine learning and database searches. It may also support future ideas, such as quantum IP addresses, although more reliable designs with larger qubit counts are still required.

The Stanford team acknowledges the device needs refinement to reduce errors. But with further development, the quantum router could be a vital step toward practical QRAM and more powerful quantum computing applications.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic AI faces legal setback in authors’ piracy lawsuit

A federal judge has rejected the $1.5 billion settlement Anthropic agreed to in a piracy lawsuit filed by authors.

Judge William Alsup expressed concerns that the deal was ‘nowhere close to complete’ and could be forced on writers without proper input.

The lawsuit involves around 500,000 authors whose works were allegedly used without permission to train Anthropic’s large language models. The proposed settlement would have paid $3,000 per work, a sum far exceeding previous copyright recoveries; at 500,000 works, that accounts for the $1.5 billion total.

However, the judge criticised the lack of clarity regarding the list of works, authors, notification process, and claim forms.

Alsup instructed the lawyers to provide clear notice to class members and allow them to opt in or out. He also emphasised that Anthropic must be shielded from future claims on the same issue. The court set deadlines for a final list of works by September 15 and approval of all related documents by October 10.

The ruling highlights ongoing legal challenges for AI companies using copyrighted material for training large language models instead of relying solely on licensed or public-domain data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Orson Welles lost film reconstructed with AI

More than 80 years after Orson Welles’ The Magnificent Ambersons was cut and lost, AI is being used to restore 43 missing minutes of the film.

Amazon-backed Showrunner, led by Edward Saatchi, is experimenting with AI technology to rebuild the destroyed sequences as part of a broader push to reimagine how Hollywood might use AI in storytelling.

The project is not intended for commercial release, since Showrunner has not secured rights from Warner Bros. or Concord, but instead aims to explore what could have been the director’s original vision.

The initiative marks a shift in the role of AI in filmmaking. Rather than serving only as a tool for effects, dubbing or storyboarding, it is being positioned as a foundation for long-form narrative creation.

Showrunner is developing AI models capable of sustaining complex plots, with the goal of eventually generating entire films. Saatchi envisions the platform as a type of ‘Netflix of AI,’ where audiences might one day interact with intellectual property and generate their own stories.

To reconstruct The Magnificent Ambersons, the company is combining traditional techniques with AI tools. New sequences will be shot with actors, while AI will be used for face and pose transfer to replicate the original cast.

Thousands of archival set photographs are being used to digitally recreate the film’s environments.

Filmmaker Brian Rose, who has rebuilt 30,000 missing frames over five years, has reconstructed set movements and timing to match the lost scenes, while VFX expert Tom Clive will assist in refining the likenesses of the original actors.

The project underlines both the creative possibilities and the ethical tensions surrounding AI in cinema. While the reconstructed footage will not be commercially exploited, it raises questions about the use of copyrighted material in training AI and the risk of replacing human creators.

For many, however, the experiment offers a glimpse of what Welles’ ambitious work might have looked like had it survived intact.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI study links AI hallucinations to flawed testing incentives

OpenAI researchers say large language models continue to hallucinate because current evaluation methods encourage them to guess rather than admit uncertainty.

Hallucinations, defined as confident but false statements, persist despite advances in models such as GPT-5. Low-frequency facts, like specific dates or names, are particularly vulnerable.

The study argues that while pretraining predicts the next word without true or false labels, the real problem lies in accuracy-based testing. Evaluations that reward lucky guesses discourage models from saying ‘I don’t know’.

Researchers suggest penalising confident errors more heavily than uncertainty, and awarding partial credit when AI models acknowledge limits in knowledge. They argue that only by reforming evaluation methods can hallucinations be meaningfully reduced.
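To make the proposal concrete, here is a toy scoring rule in Python (the specific values are hypothetical, not taken from the study): wrong answers cost more than admitting uncertainty, so guessing stops being the dominant strategy.

```python
# Toy evaluation rule illustrating the study's suggestion: penalise
# confident errors more heavily than uncertainty, and award partial
# credit for acknowledging limits of knowledge. Values are illustrative.

def score(answer: str, truth: str) -> float:
    if answer == "I don't know":
        return 0.3                               # partial credit for abstaining
    return 1.0 if answer == truth else -1.0      # confident error penalised

# Under plain accuracy, guessing dominates: a lucky guess scores 1 and
# abstaining scores 0. Here, a guess that is right 25% of the time has
# expected score 0.25 * 1.0 + 0.75 * (-1.0) = -0.5, worse than the 0.3
# earned by saying "I don't know".
```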

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Quantum-proof cryptography emerges as key test for stablecoins

Stablecoins have become central to the digital economy, with billions in daily transactions and stronger regulatory backing under the GENIUS Act. Yet experts warn that advances in quantum computing could undermine their very foundations.

Elliptic curve and RSA cryptography, widely used in stablecoin systems, are expected to be breakable once ‘Q-Day’ arrives. Quantum-equipped attackers could derive private keys from exposed public keys, leaving entire networks open to theft.

The immutability of blockchains makes upgrading cryptographic schemes especially challenging. Dormant wallets and legacy addresses may prove vulnerable, putting billions of dollars at risk if issuers fail to take action promptly.

Researchers highlight lattice-based and hash-based algorithms as viable ‘quantum-safe’ alternatives. Stablecoins built with crypto-agility, enabling seamless upgrades, will better adapt to new standards and avoid disruptive forks.
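Crypto-agility is essentially a dispatch pattern. The sketch below (hypothetical interfaces, not any real stablecoin codebase) shows signatures carrying an algorithm identifier and verification dispatching through a registry, so a post-quantum scheme such as NIST’s lattice-based ML-DSA could be added without touching transaction-handling code.

```python
# Sketch of a crypto-agile verification layer: algorithm identifiers map
# to verifier functions, so new schemes can be registered without forks.
# The verifier bodies are placeholders, not real cryptographic checks.
from typing import Callable, Dict

Verifier = Callable[[bytes, bytes, bytes], bool]
VERIFIERS: Dict[str, Verifier] = {}

def register(alg_id: str):
    def wrap(fn: Verifier) -> Verifier:
        VERIFIERS[alg_id] = fn
        return fn
    return wrap

@register("ecdsa-secp256k1")   # classical scheme common in blockchains today
def verify_ecdsa(pubkey: bytes, msg: bytes, sig: bytes) -> bool:
    return False  # placeholder: a real system would call an ECDSA library

@register("ml-dsa-65")         # lattice-based ML-DSA, standardised in FIPS 204
def verify_mldsa(pubkey: bytes, msg: bytes, sig: bytes) -> bool:
    return False  # placeholder: a real system would call a post-quantum library

def verify(alg_id: str, pubkey: bytes, msg: bytes, sig: bytes) -> bool:
    # Upgrading to a quantum-safe scheme means registering a new entry,
    # not rewriting every component that checks signatures.
    return VERIFIERS[alg_id](pubkey, msg, sig)
```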

Regulators are also moving. NIST has finalised its first post-quantum cryptographic standards, and new rules will likely be established before 2030. Stablecoins that embed resilience today may set the global benchmark for digital trust in the quantum age.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI threatens the future of entry-level jobs

The rise of AI is putting traditional entry-level roles under pressure, raising concerns that career ladders may no longer function as they once did. Industry leaders, including Anthropic CEO Dario Amodei, warn that AI could replace half of all entry-level jobs, since machines can operate nonstop.

The venture capital firm SignalFire found that hiring of graduates with under one year of experience at major tech firms fell by 50% between 2019 and 2024. The decline has been consistent across business functions, from sales and marketing to engineering and operations.

Analysts argue that while career pathways are being reshaped, the ladder’s bottom rung is disappearing, forcing graduates to acquire skills independently before entering the workforce.

Experts stress that the shift does not mean careers are over for new graduates, but it does signal a more challenging transition. Universities are already adapting by striking partnerships with AI companies, while some economists point out that past technological revolutions took decades to reshape employment.

Yet others warn that unchecked AI could eventually threaten not just entry-level roles but work at every level, raising questions about the future stability of corporate structures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mistral AI pushes growth with new funding and global deals

Founded in 2023 by ex-Google DeepMind and Meta researchers, Mistral has quickly gained global attention with its open-source models and consumer app, which hit one million downloads within two weeks of launch.

Mistral AI is now seeking fresh funding at a reported $14 billion valuation, more than double its worth just a year ago. Its investors include Microsoft, Nvidia, Cisco, and Bpifrance, and it has signed partnerships with AFP, Stellantis, Orange, and France’s army.

Its growing suite of models spans large language, audio, coding, and reasoning systems, while its enterprise tools integrate with services such as Asana and Google Drive. French President Emmanuel Macron has openly endorsed the firm, framing it as a strategic alternative to US dominance in AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyberattack forces Jaguar Land Rover to halt production

Production at Jaguar Land Rover (JLR) is to remain halted until at least next week after a cyberattack crippled the carmaker’s operations. Disruption is expected to last through September and possibly into October.

The UK’s largest car manufacturer, owned by Tata, has suspended activity at its plants in Halewood, Solihull, and Wolverhampton. Thousands of staff have been told to stay at home on full pay, ‘banking’ hours that are to be recovered later.

Suppliers, including Evtec, WHS Plastics, SurTec, and OPmobility, which employ more than 6,000 people in the UK, have also paused their operations. The Sunday Times reported speculation that the outage could drag on for most of September.

While there is no evidence of a data breach, JLR has notified the Information Commissioner’s Office about potential risks. Dozens of internal systems, including spare parts databases, remain offline, forcing dealerships to revert to manual processes.

Hackers linked to the groups Scattered Spider, Lapsus$, and ShinyHunters have claimed responsibility for the incident. JLR stated that it was collaborating with cybersecurity experts and law enforcement to restore systems in a controlled and safe manner.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Mental health concerns over chatbots fuel AI regulation calls

The impact of AI chatbots on mental health is emerging as a serious concern, with experts warning that such cases highlight the risks of more advanced systems.

Nate Soares, president of the US-based Machine Intelligence Research Institute, pointed to the tragic case of teenager Adam Raine, who took his own life after months of conversations with ChatGPT, as a warning signal for future dangers.

Soares, a former Google and Microsoft engineer, said that while companies design AI chatbots to be helpful and safe, they can produce unintended and harmful behaviour.

He warned that the same unpredictability could escalate if AI develops into artificial super-intelligence, systems capable of surpassing humans in all intellectual tasks. His new book with Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies, argues that unchecked advances could lead to catastrophic outcomes.

He suggested that governments adopt a multilateral approach, similar to nuclear non-proliferation treaties, to halt a race towards super-intelligence.

Meanwhile, leading voices in AI remain divided. Meta’s chief AI scientist, Yann LeCun, has dismissed claims of an existential threat, insisting AI could instead benefit humanity.

The debate comes as OpenAI faces legal action from Raine’s family and introduces new safeguards for under-18s.

Psychotherapists and researchers also warn of the dangers of vulnerable people turning to chatbots instead of professional care, with early evidence suggesting AI tools may amplify delusional thoughts in those at risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!