Altman warns of harmful AI use after model backlash

OpenAI chief executive Sam Altman has warned that many ChatGPT users are engaging with AI in self-destructive ways. His comments follow backlash over the sudden discontinuation of GPT-4o and other older models, which he admitted was a mistake.

Altman said that users form powerful attachments to specific AI models, and while most can distinguish between reality and fiction, a small minority cannot. He stressed OpenAI’s responsibility to manage the risks for those in mentally fragile states.

Using ChatGPT as a therapist or life coach was not his concern, as many people already benefit from it. Instead, he worried about cases where advice subtly undermines a user’s long-term well-being.

The model removals triggered a huge social-media outcry, with complaints that newer versions offered shorter, less emotionally rich responses. OpenAI has since restored GPT-4o for Plus subscribers, while free users will only have access to GPT-5.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New Instagram Map lets users share location with consent

Instagram has introduced an opt-in feature called Instagram Map, allowing users in the US to share their recent active location and explore location-based content.

Adam Mosseri, head of Instagram, clarified that location sharing is off by default and visible only when users choose to share.

Confusion arose as some users mistakenly believed their location was automatically shared because they could see themselves on the map upon opening the app.

The feature also displays location tags from Stories or Reels, making location-based content easier to find.

Unlike Snap Map, Instagram Map updates location only when the app is open or running in the background, without providing continuous real-time tracking.

Users can access the Map by going to their direct messages and selecting the Map option, where they can control who sees their location, choosing between Friends, Close Friends, selected users, or no one. Even if location sharing is turned off, users will still see the locations of others who share with them.

Instagram Map shows friends’ shared locations and nearby Stories or Reels tagged with locations, allowing users to discover events or places through their network.

Additionally, users can post short, temporary messages called Notes, which appear on the map when shared with a location. The feature encourages users to think carefully before sharing location tags in posts, especially while still at the tagged place.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GitHub CEO says developers will manage AI agents

GitHub’s CEO, Thomas Dohmke, envisions a future where developers no longer write code by hand but oversee AI agents that generate it. He highlights that many developers already use AI tools to assist with coding tasks.

Early adoption began with debugging, boilerplate and code snippets, and evolved into collaborative brainstorming and iterative prompting with AI. Developers are now learning to treat AI tools like partners and guide their ‘thought processes’.

According to interviews with 22 developers, half expect AI to write around 90 percent of their code within two years, while the rest foresee that happening within five. The shift is seen as a change from writing to verifying and refining AI-generated work.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Article 19 report finds Belarus’s ‘anti-extremism’ laws threaten digital rights

Digital rights activist group Article 19 has found in its recent report that Belarus’s ‘anti-extremist’ and ‘anti-terrorist’ laws are repressing digital rights.

The report reveals that authorities have misused these laws to prosecute individuals for leaving online comments, making donations, or sharing songs or memes that appear to carry critical messages towards the government.

Since the 2020–2021 protests, Belarusian de facto authorities have reportedly initiated at least 22,500 criminal cases related to ‘anti-extremism’. ‘In collaboration with our partner Human Constanta, we present a joint analysis highlighting this alarming trend, which further intensifies the widespread repression of civil society,’ the organisation said.

Article 19 states in its report that such actions restrict digital rights and violate international human rights law, including the right to freedom of expression and the right to seek, receive, and impart information.

Additionally, Article 19 notes that Belarus’s ‘anti-extremism’ laws lack the clarity required under international human rights standards, employing vague terms broadly interpreted to suppress digital expression and create a chilling effect.

In practice, this means people are discouraged or prevented from legitimate expression or behaviour due to fear of legal punishment or other negative consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated video misleads as tsunami footage in Japan

An 8.8-magnitude earthquake off Russia’s Kamchatka peninsula at the end of July triggered tsunami warnings across the Pacific, including Japan. Despite widespread alerts and precautionary evacuations, the most significant wave recorded in Japan was only 1.3 metres high.

A video showing large waves approaching a Japanese coastline, which went viral with over 39 million views on platforms like Facebook and TikTok, was found to be AI-generated and not genuine footage.

The clip, appearing as if filmed from a plane, was initially posted online months earlier by a YouTube channel specialising in synthetic visuals.

Analysis of the video revealed inconsistencies, including unnatural water movements and a stationary plane, confirming it was fabricated. Additionally, numerous Facebook pages shared the video and linked it to commercial sites, spreading misinformation.

Official reports from Japanese broadcasters confirmed that the actual tsunami waves were much smaller, and no catastrophic damage occurred.

The incident highlights ongoing challenges in combating AI-generated disinformation related to natural disasters, as similar misleading content continues to circulate online during crisis events.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Warner Bros Discovery targets password sharing on Max

Warner Bros. Discovery is preparing to aggressively limit password sharing on its Max streaming platform, beginning next month and escalating throughout 2025. The move aims to convert shared users into paying subscribers, following the strategies of Netflix and Disney+.

The company plans to deploy technology that detects unusual login activity, such as access from multiple locations. Users will get gentle warnings before stricter actions like suspensions or paid upgrades are enforced.
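As an illustration of the detection described above, a simple heuristic might flag accounts that log in from too many distinct locations within a short window. The thresholds, function names, and data shapes below are assumptions for the sketch, not Warner Bros. Discovery's actual system:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical thresholds -- illustrative only.
MAX_LOCATIONS = 3          # distinct locations tolerated per window
WINDOW = timedelta(hours=24)

def flag_unusual_accounts(logins):
    """logins: iterable of (account, location, timestamp) tuples.
    Returns the set of accounts seen in more than MAX_LOCATIONS
    distinct locations within any WINDOW-length span."""
    by_account = defaultdict(list)
    for account, location, ts in logins:
        by_account[account].append((ts, location))

    flagged = set()
    for account, events in by_account.items():
        events.sort()  # chronological order
        for i, (start, _) in enumerate(events):
            # distinct locations in the window starting at this login
            locs = {loc for ts, loc in events[i:] if ts - start <= WINDOW}
            if len(locs) > MAX_LOCATIONS:
                flagged.add(account)
                break
    return flagged
```

In a real deployment, flagged accounts would first receive the "gentle warnings" the company describes before any suspension or upgrade prompt.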

The initiative seeks to boost revenue and reduce subscriber churn in an increasingly competitive streaming market.

While concerns remain about user dissatisfaction and possible cancellations, Warner Bros. Discovery is confident that its extensive library of popular content, including HBO, DC, and Discovery titles, will encourage loyalty.

The goal is to create a sustainable revenue model that directly supports investments in original programming.

Industry observers note that Max’s crackdown reflects broader streaming trends, where enforcing account integrity becomes essential to growth. The full impact will be clear by the end of 2025, possibly shaping future subscription management.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK MP creates AI bot to enhance communication with constituents

AI has become increasingly integrated into people’s lives in recent years, particularly through the use of chatbots and in ways previously unimaginable. One such example is the initiative taken by UK Member of Parliament Mark Sewards, who has created an AI bot of himself to interact with constituents.

Specifically, Labour’s Mark Sewards has partnered with an AI start-up to launch a virtual avatar that uses his voice, allowing constituents to raise local concerns and ask policy-related questions. While this may appear to offer a quicker and more convenient means of communication, opinions are divided.

On one hand, there are concerns around privacy, data security, a lack of human interaction, and the chatbot’s ability to resolve more complex issues. Dr Oman from the University of Sheffield warns that older users may not realise they are speaking to a bot, which could lead to confusion and distress.

Professor Victoria Honeyman from the University of Leeds notes that, while the bot can handle straightforward queries and free up time, it may cause upset when users are dealing with emotional or complicated matters, potentially undermining public trust in MPs and public services.

At the same time, Mark Sewards emphasised that the chatbot will not replace traditional methods such as advice surgeries; rather, he sees the project as a way to embrace emerging technology and improve accessibility.

Professor Honeyman added that, although it is not a complete substitute for face-to-face engagement, the chatbot signals a broader shift in how MPs connect with the public and could prove effective with further development and adaptation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tech giants under fire in Australia for failing online child protection standards

A report by Australia’s eSafety Commissioner showed that tech giants, including Apple, Google, Meta, and Microsoft, have failed to act against online child sexual abuse. It found that Apple and YouTube do not track the number of abuse reports they receive or how quickly they respond, raising serious concerns. Additionally, both companies failed to disclose the number of trust and safety staff they employ, highlighting ongoing transparency and accountability issues in protecting children online.

In July 2024, the eSafety Commissioner of Australia took action by issuing legally enforceable notices to major tech companies, pressuring them to improve their response to child sexual abuse online.

These notices legally require recipients to comply within a set timeframe. Under the order, each company was required to report to eSafety every six months over a two-year period, detailing its efforts to combat child sexual abuse material, livestreamed abuse, online grooming, sexual extortion, and AI-generated content.

While similar notices were issued in 2022 and 2023, the companies have made minimal effort to prevent such crimes, according to Australia’s eSafety Commissioner, Julie Inman Grant.

Key findings from the eSafety commissioner are:

  • Apple did not use hash-matching tools to detect known CSEA images on iCloud (which was opt-in, end-to-end encrypted) and did not use hash-matching tools to detect known CSEA videos on iCloud or iCloud email. For iMessage and FaceTime (which were end-to-end encrypted), Apple only used Communication Safety, Apple’s safety intervention to identify images or videos that likely contain nudity, as a means of ‘detecting’ CSEA.
  • Discord did not use hash-matching tools for known CSEA videos on any part of the service (despite using hash-matching tools for known images and tools to detect new CSEA material).
  • Google did not use hash-matching tools to detect known CSEA images on Google Messages (end-to-end encrypted), nor did it detect known CSEA videos on Google Chat, Google Messages, or Gmail.
  • Microsoft did not use hash-matching tools for known CSEA images stored on OneDrive, nor did it use hash-matching tools to detect known videos within content stored on OneDrive or Outlook.
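For context, the hash-matching the findings refer to compares fingerprints of uploaded files against a database of known material. The sketch below uses exact cryptographic hashes for simplicity; production systems typically rely on perceptual hashes (such as PhotoDNA) that tolerate re-encoding and cropping, and the hash database here is a made-up placeholder:

```python
import hashlib

# Hypothetical database of digests of known prohibited files
# (placeholder values for illustration only).
KNOWN_HASHES = {
    "a665a45920422f9d417e4867efdc4fb8a04a1f3fff1fa07e998e86f7f7a27ae3",
}

def sha256_hex(data: bytes) -> str:
    """Fingerprint a file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def matches_known(data: bytes, known: set) -> bool:
    """Exact hash-matching: flags a file only if its digest is
    already in the database -- it cannot detect new material."""
    return sha256_hex(data) in known
```

The limitation in the last comment is why the report distinguishes hash-matching for known material from separate tools that detect new material.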

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Trump Media trials new AI search engine with help from Perplexity

Trump Media and Technology Group has begun testing a new AI-powered search engine called Truth Search AI on its Truth Social platform.

Developed in partnership with AI company Perplexity, the feature is intended to enhance access to information for users of the platform.

Devin Nunes, CEO and Chairman of Trump Media, said the tool will strengthen Truth Social’s position in the so-called ‘Patriot Economy’.

Perplexity’s Chief Business Officer, Dmitry Shevelenko, added that the collaboration brings powerful AI to users who are seeking answers to significant questions.

The search engine is already live on the platform and has responded to politically sensitive queries with measured language.

When asked whether Donald Trump was a liar, the tool noted that the label often depends on context, but acknowledged that fact-checkers have documented many misleading claims.

A similar question about Nancy Pelosi prompted the response that such a claim was partisan rather than factual.

Trump Media plans to expand the feature to its iOS and Android apps shortly. The launch is part of a wider strategy to broaden the company’s digital offerings, which also include ventures in cryptocurrency and finance, such as a proposed Bitcoin ETF in partnership with Crypto.com and Yorkville America Digital.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US agencies to adopt ChatGPT to modernise government operations

The US government has finalised a deal with OpenAI to integrate ChatGPT Enterprise across all federal agencies. Each agency will access ChatGPT for $1 to support AI adoption and modernise operations.

According to the General Services Administration, the move aligns with the White House’s AI Action Plan, which aims to make the US a global leader in AI development. The plan promotes AI integration, innovation, and regulation across public institutions.

However, privacy advocates and cybersecurity experts have raised concerns over the risks of centralised AI in government. Critics cite the potential for mass surveillance, narrative control, and sensitive data exposure.

Sam Altman, CEO of OpenAI, has cautioned users that AI conversations are not protected under privacy laws and could be used in legal proceedings. Storing data on centralised servers via large language models raises concerns over civil liberties and government overreach.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!