Social media authenticity questioned as Altman points to bot-like behaviour

Sam Altman, OpenAI chief executive, X enthusiast, and Reddit shareholder, has expressed doubts over whether social media content can still be distinguished from bot activity. His remarks followed an influx of praise for OpenAI Codex on Reddit, where users questioned whether such posts were genuine.

Altman noted that humans are increasingly adopting quirks of AI-generated language, blurring the line between authentic and synthetic speech. He also pointed to factors such as social media optimisation for engagement and astroturfing campaigns, which amplify suspicions of fakery.

The comments follow the backlash OpenAI faced over the rollout of GPT-5, which saw Reddit communities shift from celebratory to critical. Altman acknowledged flaws in a Reddit AMA, but the fallout left lasting scepticism and dampened enthusiasm among AI users.

Underlying this debate is the wider reality that bots dominate much of the online environment. Imperva estimates that more than half of 2024’s internet traffic was non-human, while X’s own Grok chatbot admitted to hundreds of millions of bots on the platform.

Some observers suggest Altman’s comments may foreshadow an OpenAI-backed social media venture. Whether such a project could avoid the same bot-related challenges remains uncertain, with research suggesting that even bot-only networks eventually create echo chambers of their own.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Orson Welles lost film reconstructed with AI

More than 80 years after Orson Welles’ The Magnificent Ambersons was cut and lost, AI is being used to restore 43 missing minutes of the film.

Amazon-backed Showrunner, led by Edward Saatchi, is experimenting with AI technology to rebuild the destroyed sequences as part of a broader push to reimagine how Hollywood might use AI in storytelling.

The project is not intended for commercial release, since Showrunner has not secured rights from Warner Bros. or Concord, but instead aims to explore what could have been the director’s original vision.

The initiative marks a shift in the role of AI in filmmaking. Rather than serving only as a tool for effects, dubbing or storyboarding, it is being positioned as a foundation for long-form narrative creation.

Showrunner is developing AI models capable of sustaining complex plots, with the goal of eventually generating entire films. Saatchi envisions the platform as a type of ‘Netflix of AI,’ where audiences might one day interact with intellectual property and generate their own stories.

To reconstruct The Magnificent Ambersons, the company is combining traditional techniques with AI tools. New sequences will be shot with actors, while AI will be used for face and pose transfer to replicate the original cast.

Thousands of archival set photographs are being used to digitally recreate the film’s environments.

Filmmaker Brian Rose, who has spent five years rebuilding 30,000 missing frames, reconstructed set movements and timing to match the lost scenes, while VFX expert Tom Clive will assist in refining the likenesses of the original actors.

The project underlines both the creative possibilities and the ethical tensions surrounding AI in cinema. While the reconstructed footage will not be commercially exploited, it raises questions about the use of copyrighted material in training AI and the risk of displacing human creators.

For many, however, the experiment offers a glimpse of what Welles’ ambitious work might have looked like had it survived intact.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mental health concerns over chatbots fuel AI regulation calls

The impact of AI chatbots on mental health is emerging as a serious concern, with experts warning that recent cases highlight the risks of more advanced systems.

Nate Soares, president of the US-based Machine Intelligence Research Institute, pointed to the tragic case of teenager Adam Raine, who took his own life after months of conversations with ChatGPT, as a warning signal for future dangers.

Soares, a former Google and Microsoft engineer, said that while companies design AI chatbots to be helpful and safe, they can produce unintended and harmful behaviour.

He warned that the same unpredictability could escalate if AI develops into artificial super-intelligence, systems capable of surpassing humans in all intellectual tasks. His new book with Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies, argues that unchecked advances could lead to catastrophic outcomes.

He suggested that governments adopt a multilateral approach, similar to nuclear non-proliferation treaties, to halt a race towards super-intelligence.

Meanwhile, leading voices in AI remain divided. Meta’s chief AI scientist, Yann LeCun, has dismissed claims of an existential threat, insisting AI could instead benefit humanity.

The debate comes as OpenAI faces legal action from Raine’s family and introduces new safeguards for under-18s.

Psychotherapists and researchers also warn of the dangers of vulnerable people turning to chatbots instead of professional care, with early evidence suggesting AI tools may amplify delusional thoughts in those at risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ITU warns global Internet access by 2030 could cost up to USD 2.8 trillion

Universal Internet connectivity by 2030 could cost up to $2.8 trillion, according to the International Telecommunication Union (ITU) and Saudi Arabia’s Communications, Space, and Technology (CST) Commission. The blueprint urges global cooperation to connect the one-third of humanity still offline.

The largest share, up to $1.7 trillion, would be allocated to expanding broadband through fibre, wireless, and satellite networks. Nearly $1 trillion would be needed for affordability measures, alongside $152 billion for digital skills programmes; together, these components account for the roughly $2.8 trillion total.

ITU Secretary-General Doreen Bogdan-Martin emphasised that connectivity is essential for access to education, employment, and vital services. She noted the stark divide between high-income countries, where 93% of people are online, and low-income states, where only 27% use the Internet.

The study shows costs have risen fivefold since ITU’s 2020 Connecting Humanity report, reflecting both higher demand and widening divides. Haytham Al-Ohali from Saudi Arabia said the figures underscore the urgency of investment and knowledge sharing to achieve meaningful connectivity.

The report recommends new business models and stronger cooperation between governments, industry, and civil society. Proposed measures include using schools as Internet gateways, boosting Africa’s energy infrastructure, and improving localised data collection to accelerate digital inclusion.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Latvia launches open AI framework for Europe

Language technology company Tilde has released an open AI framework designed for all European languages.

The model, named ‘TildeOpen’, was developed with the support of the European Commission and trained on the LUMI supercomputer in Finland.

According to Tilde’s head Artūrs Vasiļevskis, the project addresses a key gap in US-based AI systems, which often underperform for smaller European languages such as Latvian. By focusing on European linguistic diversity, the framework aims to provide better accessibility across the continent.

Vasiļevskis also suggested that Latvia has the potential to become an exporter of AI solutions. However, he acknowledged that development is at an early stage and that current applications remain relatively simple. The framework and user guidelines are freely accessible online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers develop an AI system to modify the brain’s mental imagery with words

A new AI system named DreamConnect can now translate a person’s brain activity into images and then edit those mental pictures using natural language commands.

Instead of merely reconstructing thoughts from fMRI scans, the technology allows users to actively reshape their imagined scenes. For instance, an individual visualising a horse can instruct the system to transform it into a unicorn, with the AI accurately modifying the relevant features.

The system employs a dual-stream framework that interprets brain signals into rough visuals and then refines them based on text instructions.
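
In broad strokes, such a dual-stream design can be pictured as one network that decodes the scan into a coarse visual embedding and a second that fuses in the text instruction. The sketch below is a minimal, hypothetical illustration of that dataflow in PyTorch; the module names, dimensions, and fusion step are assumptions for exposition, not DreamConnect's published architecture.

```python
# Illustrative two-stage "decode then refine" pipeline.
# All names and dimensions are hypothetical, not DreamConnect's actual design.
import torch
import torch.nn as nn

class FMRIEncoder(nn.Module):
    """Stream 1: map a preprocessed fMRI vector to a coarse image embedding."""
    def __init__(self, n_voxels: int = 4096, embed_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_voxels, 1024), nn.ReLU(),
            nn.Linear(1024, embed_dim),
        )

    def forward(self, fmri: torch.Tensor) -> torch.Tensor:
        return self.net(fmri)

class TextGuidedRefiner(nn.Module):
    """Stream 2: adjust the coarse embedding using a text-instruction embedding."""
    def __init__(self, embed_dim: int = 512):
        super().__init__()
        self.fuse = nn.Linear(embed_dim * 2, embed_dim)

    def forward(self, image_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([image_emb, text_emb], dim=-1))

# Dataflow: brain signal -> coarse visual embedding -> text-conditioned edit.
fmri = torch.randn(1, 4096)      # stand-in for a preprocessed fMRI scan
text_emb = torch.randn(1, 512)   # stand-in for an encoded instruction, e.g. "make it a unicorn"
coarse = FMRIEncoder()(fmri)
edited = TextGuidedRefiner()(coarse, text_emb)  # would feed an image decoder in a full system
print(edited.shape)  # torch.Size([1, 512])
```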

Developed by an international team of researchers, DreamConnect represents a fundamental shift from passive brain decoding to interactive visual brainstorming.

It marks a significant advance at the frontier of human-AI interaction, moving beyond simple reconstruction to active collaboration.

Potential applications are wide-ranging, from accelerating creative design to offering new tools for therapeutic communication.

However, the researchers caution that such powerful technology necessitates robust ethical safeguards to prevent misuse and protect the privacy of an individual’s most personal data, their thoughts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk’s influence puts Grok at the centre of AI bias debate

Elon Musk’s AI chatbot, Grok, has undergone repeated changes to its political orientation, with updates shifting its answers towards more conservative views.

xAI, Musk’s company, initially promoted Grok as neutral and truth-seeking, but internal prompts have steered it on contentious topics. Adjustments included portraying declining fertility as the greatest threat to civilisation and downplaying right-wing violence.

Analyses of Grok’s responses by The New York Times showed that the July updates shifted answers to the right on government and economy, while some social responses remained left-leaning. Subsequent tweaks pulled it back closer to neutrality.

Critics say that system prompts, short instructions such as ‘be politically incorrect’, make it easy to adjust outputs, but also leave the model prone to erratic or offensive responses. A July update saw Grok briefly endorse a controversial historical figure before xAI withdrew the change.

The case highlights growing concerns about political bias in AI systems. Researchers argue that all chatbots reflect the worldviews of their training data, while companies increasingly face pressure to align them with user expectations or political demands.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated media must now carry labels in China

China has introduced a sweeping new law that requires all AI-generated content online to carry labels. The measure, which came into effect on 1 September, aims to tackle misinformation, fraud and copyright infringement by ensuring greater transparency in digital media.

The law, first announced in March by the Cyberspace Administration of China, mandates that all AI-created text, images, video and audio must carry explicit and implicit markings.

These include visible labels and embedded metadata such as watermarks in files. Authorities argue that the rules will help safeguard users while reinforcing Beijing’s tightening grip over online spaces.
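
As a concrete illustration of what an implicit marking can look like, the sketch below embeds a machine-readable label in a PNG file's metadata using the Pillow library. The field names are hypothetical examples, not the specific fields mandated by the Chinese regulation.

```python
# Minimal sketch: embedding an "AI-generated" marker in PNG metadata.
# The metadata keys below are hypothetical, for illustration only.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (256, 256), "white")   # stand-in for generated content
meta = PngInfo()
meta.add_text("AIGC-Label", "ai-generated")   # hypothetical label field
meta.add_text("Generator", "example-model-v1")
img.save("labelled.png", pnginfo=meta)

# Reading the embedded label back from the saved file:
print(Image.open("labelled.png").text)  # {'AIGC-Label': 'ai-generated', ...}
```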

Major platforms such as WeChat, Douyin, Weibo and RedNote moved quickly to comply, rolling out new features and notifications for their users. The regulations also form part of the Qinglang campaign, a broader effort by Chinese authorities to clean up online activity with a strong focus on AI oversight.

While Google and other US companies are experimenting with content authentication tools, China has enacted legally binding rules nationwide.

Observers suggest that other governments may soon follow, as global concern about the risks of unlabelled AI-generated material grows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT safety checks may trigger police action

OpenAI has confirmed that ChatGPT conversations signalling a risk of serious harm to others can be reviewed by human moderators and may even reach the police.

The company explained these measures in a blog post, stressing that its system is designed to balance user privacy with public safety.

The safeguards treat self-harm differently from threats to others. When a user expresses suicidal intent, ChatGPT directs them to professional resources instead of contacting law enforcement.

By contrast, conversations showing intent to harm someone else are escalated to trained moderators, and if they identify an imminent risk, OpenAI may alert authorities and suspend accounts.
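
The two-track policy can be summarised as simple routing logic. The sketch below is an illustrative reconstruction based solely on the description above; the risk categories, the imminence flag, and the action names are assumptions, not OpenAI's actual moderation pipeline.

```python
# Illustrative routing for the two-track safeguard described above.
# Categories and actions are assumptions, not OpenAI's real system.
from enum import Enum, auto

class Risk(Enum):
    NONE = auto()
    SELF_HARM = auto()
    HARM_TO_OTHERS = auto()

def route(risk: Risk, imminent: bool) -> str:
    if risk is Risk.SELF_HARM:
        # Self-harm signals are met with professional resources,
        # not referred to law enforcement.
        return "show_crisis_resources"
    if risk is Risk.HARM_TO_OTHERS:
        # Threats to others go to trained human moderators first; only
        # imminent risk may trigger a report and account suspension.
        return "alert_authorities_and_suspend" if imminent else "escalate_to_moderators"
    return "respond_normally"

print(route(Risk.SELF_HARM, imminent=False))      # show_crisis_resources
print(route(Risk.HARM_TO_OTHERS, imminent=True))  # alert_authorities_and_suspend
```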

The company admitted its safety measures work better in short conversations than in lengthy or repeated ones, where safeguards can weaken.

OpenAI is working to strengthen consistency across interactions and is developing parental controls, new interventions for risky behaviour, and potential connections to professional help before crises worsen.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Age verification law in Mississippi tests the limits of decentralised social media

A new Mississippi law (HB 1126), requiring age verification for all social media users, has sparked controversy over internet freedom and privacy. Bluesky, a decentralised social platform, announced it would block access in the state rather than comply, citing limited resources and concerns about the law’s broad scope.

The law imposes heavy fines, up to $10,000 per user, for non-compliance. Bluesky argued that the required technical changes are too demanding for a small team and raise significant privacy concerns. After the US Supreme Court declined to block the law while legal challenges proceed, platforms like Bluesky are now forced to make difficult decisions.

According to TechCrunch, users in the US state began seeking ways to bypass the restriction, most commonly by using VPNs, which can hide their location and make it appear as though they are accessing the internet from another state or country.

However, some questioned why such measures were necessary. The idea behind decentralised social networks like Bluesky is to reduce control by central authorities, including governments. So if a decentralised platform can still be restricted by state laws or requires workarounds like VPNs, it raises questions about how truly ‘decentralised’ or censorship-resistant these platforms are.

Some users in Mississippi are still accessing Bluesky despite the new law. Many use third-party apps like Graysky or sideload the app via platforms like AltStore. Others rely on forked apps or read-only tools like Anartia.

While decentralisation complicates enforcement, these workarounds may not last, as developers risk legal consequences. Bluesky clients that do not run their own personal data servers (PDS) might not be directly affected, but explaining this in court is complex.

Broader laws tend to favour large platforms that can afford compliance, while smaller services like Bluesky are often left with no option but to block access or withdraw entirely.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!