Social media authenticity questioned as Altman points to bot-like behaviour

Sam Altman, X enthusiast and Reddit shareholder, has expressed doubts over whether social media content can still be distinguished from bot activity. His remarks followed an influx of praise for OpenAI Codex on Reddit, where users questioned whether such posts were genuine.

Altman noted that humans are increasingly adopting quirks of AI-generated language, blurring the line between authentic and synthetic speech. He also pointed to factors such as social media optimisation for engagement and astroturfing campaigns, which amplify suspicions of fakery.

The comments follow the backlash OpenAI faced over the rollout of GPT-5, which saw Reddit communities shift from celebratory to critical. Altman acknowledged flaws in a Reddit AMA, but the fallout left lasting scepticism and dampened enthusiasm among AI users.

Underlying this debate is the wider reality that bots dominate much of the online environment. Imperva estimates that more than half of 2024’s internet traffic was non-human, while X’s own Grok chatbot admitted to hundreds of millions of bots on the platform.

Some observers suggest Altman’s comments may foreshadow an OpenAI-backed social media venture. Whether such a project could avoid the same bot-related challenges remains uncertain, with research suggesting that even bot-only networks eventually create echo chambers of their own.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Fake GitHub downloads deliver GPUGate malware to EU IT staff

A malvertising campaign is targeting IT workers in the EU with fake GitHub Desktop installers, according to Arctic Wolf. The goal is to steal credentials, deploy ransomware, and infiltrate sensitive systems. The operation has reportedly been active for over six months.

Attackers used malicious Google Ads that redirected users to doctored GitHub repositories. Modified README files mimicked genuine download pages but linked to a lookalike domain. macOS users received the AMOS Stealer, while Windows victims downloaded bloated installers hiding malware.

The Windows malware evaded detection using GPU-based checks, refusing to run in sandboxes that lacked real graphics drivers. On genuine machines, it copied itself to %APPDATA%, sought elevated privileges, and altered Defender settings. Analysts dubbed the technique GPUGate.
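The evasion trick described above boils down to one decision: only proceed when a real graphics stack is present, since many analysis sandboxes expose only virtual display adapters. The sketch below is an illustrative reconstruction of that idea, not Arctic Wolf's published code; the marker list and adapter names are hypothetical.

```python
# Hypothetical sketch of a GPU-presence check of the kind GPUGate is
# reported to use: if every display adapter looks virtual, assume a
# sandbox and refuse to run. Marker strings here are illustrative.
VIRTUAL_MARKERS = ("vmware", "virtualbox", "microsoft basic", "qxl", "cirrus")

def looks_like_real_gpu(adapter_names):
    """Return True only if at least one adapter name does not match a
    known virtual/sandbox display adapter marker."""
    return any(
        not any(marker in name.lower() for marker in VIRTUAL_MARKERS)
        for name in adapter_names
    )

print(looks_like_real_gpu(["NVIDIA GeForce RTX 3060"]))  # real hardware
print(looks_like_real_gpu(["VMware SVGA 3D"]))           # virtual adapter
```

Defenders can invert the same logic: detonation environments that present realistic GPU drivers and adapter names are harder for such checks to fingerprint.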

The payload persisted by creating privileged tasks and sideloading malicious DLLs into legitimate executables. Its modular system could download extra malware tailored to each victim. The campaign was geo-fenced to EU targets and relied on redundant command servers.

Researchers warn that IT staff are prime targets due to their access to codebases and credentials. With the campaign still active, Arctic Wolf has published indicators of compromise, YARA rules, and security advice to mitigate the GPUGate threat.


Orson Welles lost film reconstructed with AI

More than 80 years after 43 minutes of Orson Welles’ The Magnificent Ambersons were cut and destroyed, AI is being used to restore the missing footage.

Amazon-backed Showrunner, led by Edward Saatchi, is experimenting with AI technology to rebuild the destroyed sequences as part of a broader push to reimagine how Hollywood might use AI in storytelling.

The project is not intended for commercial release, since Showrunner has not secured rights from Warner Bros. or Concord, but instead aims to explore what could have been the director’s original vision.

The initiative marks a shift in the role of AI in filmmaking. Rather than serving only as a tool for effects, dubbing or storyboarding, it is being positioned as a foundation for long-form narrative creation.

Showrunner is developing AI models capable of sustaining complex plots, with the goal of eventually generating entire films. Saatchi envisions the platform as a type of ‘Netflix of AI,’ where audiences might one day interact with intellectual property and generate their own stories.

To reconstruct The Magnificent Ambersons, the company is combining traditional techniques with AI tools. New sequences will be shot with actors, while AI will be used for face and pose transfer to replicate the original cast.

Thousands of archival set photographs are being used to digitally recreate the film’s environments.

Filmmaker Brian Rose, who has rebuilt 30,000 missing frames over five years, has reconstructed set movements and timing to match the lost scenes, while VFX expert Tom Clive will assist in refining the likenesses of the original actors.

The project underlines both the creative possibilities and the ethical tensions surrounding AI in cinema. While the reconstructed footage will not be commercially exploited, it raises questions about the use of copyrighted material in training AI and the risk of replacing human creators.

For many, however, the experiment offers a glimpse of what Welles’ ambitious work might have looked like had it survived intact.


OpenAI backs AI-generated film Critterz for 2026 release

OpenAI is supporting the production of Critterz, an AI-assisted animated film set for a global theatrical release in 2026. The project aims to show that AI can streamline filmmaking, cutting costs and production time.

OpenAI is partnering with Vertigo Films and Native Foreign to produce the film in nine months, far faster than the usual three years for animated features.

The film, budgeted under $30 million, combines OpenAI’s GPT-5 and DALL·E with traditional voice acting and hand-drawn elements. Building on the acclaimed 2023 short, Critterz will debut at the Cannes Film Festival and expand on a story where humans and AI creatures share the same world.

Writers James Lamont and Jon Foster, known for Paddington in Peru, have been brought in to shape the screenplay.

While producers highlight AI’s creative potential, concerns remain about authenticity and job security in the industry. Some fear AI films could feel impersonal, while major studios continue to defend intellectual property.

Warner Bros., Disney, and Universal are in a legal battle with Midjourney over alleged copyright violations.

Despite the debate, OpenAI remains committed to its role in pushing generative storytelling. The company is also expanding its infrastructure, forecasting spending of $115 billion by 2029, with $8 billion planned for this year alone.


Conti and LockBit dominate ransomware landscape with record attacks

Ransomware groups have evolved into billion-dollar operations targeting critical infrastructure across multiple countries, employing increasingly sophisticated extortion schemes. Between 2020 and 2022, more than 865 documented attacks were recorded across Australia, Canada, New Zealand, and the UK.

Criminals have escalated from simple encryption to double and triple extortion, threatening to leak stolen data as added leverage. Attack vectors include phishing, botnets, and unpatched flaws. Once inside, attackers use stealthy tools to persist and spread.

BlackSuit, formerly known as Conti, led with 141 attacks, followed by LockBit with 129, according to data from the Australian Institute of Criminology. Ransomware-as-a-Service groups achieved higher volumes by separating developers from the affiliates who handle breaches and negotiations.

Industrial targets bore the brunt, with 239 attacks on manufacturing and building products. The consumer goods, real estate, financial services, and technology sectors also featured prominently. Analysts note that industrial firms are often pressured into quick ransom payments to restore production.

Experts warn that today’s ransomware combines military-grade encryption with advanced reconnaissance and backup targeting, raising the stakes for defenders. The scale of activity underscores how resilient these groups remain, adapting rapidly to law enforcement crackdowns and shifting market opportunities.


Mental health concerns over chatbots fuel AI regulation calls

The impact of AI chatbots on mental health is emerging as a serious concern, with experts warning that such cases highlight the risks of more advanced systems.

Nate Soares, president of the US-based Machine Intelligence Research Institute, pointed to the tragic case of teenager Adam Raine, who took his own life after months of conversations with ChatGPT, as a warning signal for future dangers.

Soares, a former Google and Microsoft engineer, said that while companies design AI chatbots to be helpful and safe, they can produce unintended and harmful behaviour.

He warned that the same unpredictability could escalate if AI develops into artificial super-intelligence: systems capable of surpassing humans in all intellectual tasks. His new book with Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies, argues that unchecked advances could lead to catastrophic outcomes.

He suggested that governments adopt a multilateral approach, similar to nuclear non-proliferation treaties, to halt a race towards super-intelligence.

Meanwhile, leading voices in AI remain divided. Meta’s chief AI scientist, Yann LeCun, has dismissed claims of an existential threat, insisting AI could instead benefit humanity.

The debate comes as OpenAI faces legal action from Raine’s family and introduces new safeguards for under-18s.

Psychotherapists and researchers also warn of the dangers of vulnerable people turning to chatbots instead of professional care, with early evidence suggesting AI tools may amplify delusional thoughts in those at risk.


Google avoids breakup as court ruling fuels AI Mode expansion

A US district judge has declined to order a breakup of Google, softening the blow of a 2024 ruling that found the company had illegally monopolised online search.

The decision means Google can press ahead with its shift from a search engine into an answer engine, powered by generative AI.

Google’s AI Mode replaces traditional blue links with direct responses to queries, echoing the style of ChatGPT. While the feature is optional for now, it could become the default.

That alarms publishers, who depend on search traffic for advertising revenue. Studies suggest chatbots reduce referral clicks by more than 90 percent, leaving many sites at risk of collapse.

Google is also experimenting with inserting ads into AI Mode, though it remains unclear how much revenue will flow to content creators. Websites can block their data from being scraped, but doing so would also remove them from Google search entirely.

Despite these concerns, Google argues that competition from ChatGPT, Perplexity, and other AI tools shows that new rivals are reshaping the search landscape.

The judge even cited the emergence of generative AI as a factor that altered the case against Google, underlining how the rise of AI has become central to the future of the internet.


Nigeria sets sights on top 50 AI-ready nations

Nigeria has pledged to become one of the top 50 AI-ready nations, according to presidential adviser Hadiza Usman. Speaking in Abuja at a colloquium on AI policy, she said the country needs strong leadership, investment, and partnerships to meet its goals.

She stressed that policies must address Nigeria’s unique challenges and not simply replicate foreign models. The government will offer collaboration opportunities with local institutions and international partners.

The Nigeria Deposit Insurance Corporation reinforced its support, noting that technology should secure depositors without restricting innovators.

Private sector voices said AI could transform healthcare, agriculture, and public services if policies are designed with inclusion and trust in mind.


ITU warns global Internet access by 2030 could cost nearly USD 2.8 trillion

Universal Internet connectivity by 2030 could cost up to $2.8 trillion, according to the International Telecommunication Union (ITU) and Saudi Arabia’s Communications, Space, and Technology (CST) Commission. The blueprint urges global cooperation to connect the one-third of humanity still offline.

The largest share, up to $1.7 trillion, would be allocated to expanding broadband through fibre, wireless, and satellite networks. Nearly $1 trillion is needed for affordability measures, alongside $152 billion for digital skills programmes.
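The quoted components roughly sum to the headline figure; a quick back-of-the-envelope check, treating each figure as the report's stated upper bound:

```python
# Sanity check of the ITU blueprint's cost components (USD, trillions).
broadband = 1.7      # fibre, wireless and satellite networks ("up to $1.7 trillion")
affordability = 1.0  # affordability measures ("nearly $1 trillion")
skills = 0.152       # digital skills programmes ($152 billion)

total = broadband + affordability + skills
print(f"~${total:.2f} trillion")
```

The components sum to roughly $2.85 trillion, landing just above the "up to $2.8 trillion" headline, which is consistent with the broadband and affordability figures being upper bounds rather than point estimates.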

ITU Secretary-General Doreen Bogdan-Martin emphasised that connectivity is essential for access to education, employment, and vital services. She noted the stark divide between high-income countries, where 93% of people are online, and low-income states, where only 27% use the Internet.

The study shows costs have risen fivefold since ITU’s 2020 Connecting Humanity report, reflecting both higher demand and widening divides. Haytham Al-Ohali from Saudi Arabia said the figures underscore the urgency of investment and knowledge sharing to achieve meaningful connectivity.

The report recommends new business models and stronger cooperation between governments, industry, and civil society. Proposed measures include using schools as Internet gateways, boosting Africa’s energy infrastructure, and improving localised data collection to accelerate digital inclusion.


OpenAI boss Sam Altman fuels debate over dead internet theory

Sam Altman, chief executive of OpenAI, has suggested that the so-called ‘dead internet theory’ may hold some truth. The idea, long dismissed as a conspiracy theory, claims much of the online world is now dominated by computer-generated content rather than real people.

Altman noted on X that he had not previously taken the theory seriously but believed there were now many accounts run by large language models.

His remark drew criticism from users who argued that OpenAI itself had helped create the problem by releasing ChatGPT in 2022, which triggered a surge of automated content.

The spread of AI systems has intensified debate over whether online spaces are increasingly filled with artificially generated voices.

Some observers also linked Altman’s comments to his work on World Network, formerly Worldcoin, a project launched in 2019 to verify human identity online through biometric scans. That initiative has been promoted as a potential safeguard against the growing influence of AI-driven systems.
