New ChatGPT model reduces unsafe replies by up to 80%

OpenAI has updated ChatGPT’s default model after working with more than 170 mental health clinicians to help the system better spot distress, de-escalate conversations and point users to real-world support.

The update routes sensitive exchanges to safer models, expands access to crisis hotlines and adds gentle prompts to take breaks, aiming to reduce harmful responses rather than simply offering more content.

Measured improvements are significant across three priority areas: severe mental health symptoms such as psychosis and mania, self-harm and suicide, and unhealthy emotional reliance on AI.

OpenAI reports that undesired responses fell between 65 and 80 percent in production traffic and that independent clinician reviews show significant gains compared with earlier models. At the same time, rare but high-risk scenarios remain a focus for further testing.

The company used a five-step process to shape the changes: define harms, measure them, validate approaches with experts, mitigate risks through post-training and product updates, and keep iterating.

Evaluations combine real-world traffic estimates with structured adversarial tests, so better ChatGPT safeguards are in place now, and further refinements are planned as understanding and measurement methods evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Celebrity estates push back on Sora as app surges to No.1

OpenAI’s short-video app Sora topped one million downloads in under a week, then ran headlong into a likeness-rights firestorm. Celebrity families and studios demanded stricter controls. Estates for figures like Martin Luther King Jr. sought blocks on unauthorised cameos.

Users showcased hyperreal mashups that blurred satire and deception, from cartoon crossovers to dead celebrities in improbable scenes. All clips are AI-made, yet reposting across platforms spread confusion. Viewers faced a constant real-or-fake dilemma.

Rights holders pressed for consent, compensation, and veto power over characters and personas. OpenAI shifted toward opt-in for copyrighted properties and enabled estate requests to restrict cameos. Policy language on who qualifies as a public figure remains fuzzy.

Agencies and unions amplified pressure, warning of exploitation and reputational risks. Detection firms reported a surge in takedown requests for unauthorised impersonations. Watermarks exist, but removal tools undercut provenance and complicate enforcement.

Researchers warned about a growing fog of doubt as realistic fakes multiply. Every day, people are placed in deceptive scenarios, while bad actors exploit deniability. OpenAI promised stronger guardrails as Sora scales within tighter rules.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN cybercrime treaty signed in Hanoi amid rights concerns

Around 73 countries signed a landmark UN cybercrime convention in Hanoi, seeking faster cooperation against online crime. Leaders cited trillions in annual losses from scams, ransomware, and trafficking. The pact enters into force after 40 ratifications.

UN supporters say the treaty will streamline evidence sharing, extradition requests, and joint investigations. Provisions target phishing, ransomware, online exploitation, and hate speech. Backers frame the deal as a boost to global security.

Critics warn the text’s breadth could criminalise security research and dissent. The Cybersecurity Tech Accord called it a surveillance treaty. Activists fear expansive data sharing with weak safeguards.

The UNODC argues the agreement includes rights protections and space for legitimate research. Officials say oversight and due process remain essential. Implementation choices will decide outcomes on the ground.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

MLK estate pushback prompts new Sora 2 guardrails at OpenAI

OpenAI paused the ability to re-create Martin Luther King Jr. in Sora 2 after Bernice King objected to user videos. Company leaders issued a joint statement with the King estate. New guardrails will govern depictions of historical figures on the app.

OpenAI said families and authorised estates should control how likenesses appear. Representatives can request removal or opt-outs. Free speech was acknowledged, but respectful use and consent were emphasised.

Policy scope remains unsettled, including who counts as a public figure. Case-by-case requests may dominate early enforcement. Transparency commitments arrived without full definitions or timelines.

Industry pressure intensified as major talent agencies opted their clients out. CAA and UTA cited exploitation and legal exposure. Some creators welcomed the tool, revealing a split among public figures.

User appetite for realistic cameos continues to test boundaries. Rights of publicity and postmortem controls vary by state. OpenAI promised stronger safeguards while Sora 2 evolves.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Church of Greece launches AI tool LOGOS for believers

LOGOS, a digital tool developed by the Metropolis of Nea Ionia, Filadelfia, Iraklio and Halkidona alongside the University of the Aegean, has marked the Church of Greece’s entry into the age of AI.

The tool gathers information on questions of Christian faith and provides clear, practical answers instead of replacing human guidance.

Metropolitan Gabriel, who initiated the project, emphasised that LOGOS does not substitute priests but acts as a guide, bringing believers closer to the Church. He said the Church must engage the digital world, insisting that technology should serve humanity instead of the other way around.

The tool also supports younger users, allowing them to safely access accurate information on Orthodox teachings and counter misleading or harmful content found online. While it cannot receive confessions, it offers prayers and guidance to prepare believers spiritually.

The Church views LOGOS as part of a broader strategy to embrace digital tools responsibly, ensuring that faith remains accessible and meaningful in the modern technological landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI rolls out pet-centric AI video features and social tools in Sora

OpenAI has announced significant enhancements to its text-to-video app Sora. The update introduces new features including pet and object ‘cameos’ in AI-generated videos, expanded video editing tools, social sharing elements and a forthcoming Android version of the app.

Using the new pet cameo feature, users will be able to upload photos of their pets or objects and then incorporate them into animated video scenes generated by Sora. The objective is to deepen personalisation and creative expression by letting users centre their own non-human characters.

Sora is also gaining editing capabilities that simplify the creation process. Users can remix existing videos, apply stylistic changes, and integrate social-type features like feeds where others’ creations can be viewed and shared. The Android app is noted as ‘coming soon’, which will expand Sora’s accessibility beyond the initial iOS/web release.

The move reflects OpenAI’s strategy to transition Sora from an experimental novelty into a more fully featured social video product. By enabling user-owned content (pets/objects), expanding sharing functionality and broadening platform reach, Sora is positioned to compete in the generative video and social media landscape.

At the same time, the update raises questions around content use, copyright (especially when user-owned pets or objects are included), deepfake risks, and moderation. Given Sora’s prior scrutiny over synthetic media, the expansion into more personalised video may prompt further regulatory or ethical review.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Diella 2.0 set to deliver 83 new AI assistants to aid Albania’s MPs

Albania’s AI minister Diella will ‘give birth’ to 83 virtual assistants for ruling-party MPs, Prime Minister Edi Rama said, framing a quirky rollout of parliamentary copilots that record debates and propose responses.

Diella began in January as a public-service chatbot on e-Albania, then ‘Diella 2.0’ added voice and an avatar in traditional dress. Built with Microsoft by the National Agency for Information Society, it now oversees specific state tech contracts.

The legality is murky: the constitution of Albania requires ministers to be natural persons. A presidential decree left establishing the role to Rama’s responsibility, setting up likely court challenges from opposition lawmakers.

Rama says the ‘children’ will brief MPs, summarise absences, and suggest counterarguments through 2026, experimenting with automating the day-to-day legislative grind without replacing elected officials.

Reactions range from table-thumping scepticism to cautious curiosity, as other governments debate AI personhood and limits; Diella could become a template, or a cautionary tale for ‘ministerial’ bots.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI deepfake videos spark ethical and environmental concerns

Deepfake videos created by AI platforms like OpenAI’s Sora have gone viral, generating hyper-realistic clips of deceased celebrities and historical figures in often offensive scenarios.

Families of figures like Dr Martin Luther King Jr have publicly appealed to AI firms to prevent the use of their loved ones’ likenesses, highlighting ethical concerns around the technology.

Beyond the emotional impact, Dr Kevin Grecksch of Oxford University warns that producing deepfakes carries a significant environmental footprint. Instead of occurring on phones, video generation happens in data centres that consume vast amounts of electricity and water for cooling, often at industrial scales.

The surge in deepfake content has been rapid, with Sora downloaded over a million times in five days. Dr Grecksch urges users to consider the environmental cost, suggesting more integrated thinking about where data centres are built and how they are cooled to minimise their impact.

As governments promote AI growth zones such as South Oxfordshire, questions remain over sustainable infrastructure. Users are encouraged to balance technological enthusiasm with environmental mindfulness, recognising the hidden costs behind creating and sharing AI-generated media.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU investigates Meta and TikTok for DSA breaches

The European Commission has accused Meta and TikTok of breaching the Digital Services Act (DSA), highlighting failures in handling illegal content and providing researchers access to public data.

Meta’s Facebook and Instagram were found to make it too difficult for users to report illegal content or receive responses to complaints, the Commission said in its preliminary findings.

Investigations began after complaints to Ireland’s content regulator, where Meta’s EU base is located. The Commission’s inquiry, which has been ongoing since last year, aims to ensure that large platforms protect users and meet EU safety obligations.

Meta and TikTok can submit counterarguments before penalties of up to six percent of global annual turnover are imposed.

Both companies face separate concerns about denying researchers adequate access to platform data and preventing oversight of systemic online risks. TikTok is under further examination over the protection of minors and advertising transparency.

The Commission has so far launched 14 such DSA proceedings, none of which has concluded.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia rules out AI copyright exemption

The Albanese Government has confirmed that it will not introduce a Text and Data Mining Exception in Australia’s copyright law, reinforcing its commitment to protecting local creators.

The decision follows calls from the technology sector for an exemption allowing AI developers to use copyrighted material without permission or payment.

Attorney-General Michelle Rowland said the Government aims to support innovation and creativity but will not weaken existing copyright protections. The Government plans to explore fair licensing options to support AI innovation while ensuring creators are paid fairly.

The Copyright and AI Reference Group will focus on fair AI use, more explicit copyright rules for AI works, and simpler enforcement through a possible small claims forum.

The Government said Australia must prepare for AI-related copyright challenges while keeping strong protections for creators. Collaboration between the technology and creative sectors will be essential to ensure that AI development benefits everyone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!