Celebrity estates push back on Sora as app surges to No.1

OpenAI’s short-video app Sora topped one million downloads in under a week, then ran headlong into a likeness-rights firestorm. Celebrity families and studios demanded stricter controls. Estates for figures like Martin Luther King Jr. sought blocks on unauthorised cameos.

Users showcased hyperreal mashups that blurred satire and deception, from cartoon crossovers to dead celebrities in improbable scenes. All clips are AI-made, yet reposting across platforms spread confusion. Viewers faced a constant real-or-fake dilemma.

Rights holders pressed for consent, compensation, and veto power over characters and personas. OpenAI shifted toward opt-in for copyrighted properties and enabled estate requests to restrict cameos. Policy language on who qualifies as a public figure remains fuzzy.

Agencies and unions amplified pressure, warning of exploitation and reputational risks. Detection firms reported a surge in takedown requests for unauthorised impersonations. Watermarks exist, but removal tools undercut provenance and complicate enforcement.

Researchers warned about a growing fog of doubt as realistic fakes multiply. Every day, people are placed in deceptive scenarios, while bad actors exploit deniability. OpenAI promised stronger guardrails as Sora scales within tighter rules.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

MLK estate pushback prompts new Sora 2 guardrails at OpenAI

OpenAI paused the ability to re-create Martin Luther King Jr. in Sora 2 after Bernice King objected to user videos. Company leaders issued a joint statement with the King estate. New guardrails will govern depictions of historical figures on the app.

OpenAI said families and authorised estates should control how likenesses appear. Representatives can request removal or opt-outs. Free speech was acknowledged, but respectful use and consent were emphasised.

Policy scope remains unsettled, including who counts as a public figure. Case-by-case requests may dominate early enforcement. Transparency commitments arrived without full definitions or timelines.

Industry pressure intensified as major talent agencies opted their clients out. CAA and UTA cited exploitation and legal exposure. Some creators welcomed the tool, showing a split among public figures.

User appetite for realistic cameos continues to test boundaries. Rights of publicity and postmortem controls vary by state. OpenAI promised stronger safeguards while Sora 2 evolves.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Diella 2.0 set to deliver 83 new AI assistants to aid Albania’s MPs

Albania’s AI minister Diella will ‘give birth’ to 83 virtual assistants for ruling-party MPs, Prime Minister Edi Rama said, framing a quirky rollout of parliamentary copilots that record debates and propose responses.

Diella began in January as a public-service chatbot on e-Albania, then ‘Diella 2.0’ added voice and an avatar in traditional dress. Built with Microsoft by the National Agency for Information Society, it now oversees specific state tech contracts.

The legality is murky: Albania’s constitution requires ministers to be natural persons. A presidential decree left it to Rama to establish the role, setting up likely court challenges from opposition lawmakers.

Rama says the ‘children’ will brief MPs, summarise absences, and suggest counterarguments through 2026, experimenting with automating the day-to-day legislative grind without replacing elected officials.

Reactions range from table-thumping scepticism to cautious curiosity, as other governments debate AI personhood and limits; Diella could become a template, or a cautionary tale for ‘ministerial’ bots.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU investigates Meta and TikTok for DSA breaches

The European Commission has accused Meta and TikTok of breaching the Digital Services Act (DSA), highlighting failures in handling illegal content and providing researchers access to public data.

Meta’s Facebook and Instagram were found to make it too difficult for users to report illegal content or receive responses to complaints, the Commission said in its preliminary findings.

Investigations began after complaints to Ireland’s content regulator, where Meta’s EU base is located. The Commission’s inquiry, which has been ongoing since last year, aims to ensure that large platforms protect users and meet EU safety obligations.

Meta and TikTok can submit counterarguments before penalties of up to six percent of global annual turnover are imposed.

Both companies face separate concerns about denying researchers adequate access to platform data and preventing oversight of systemic online risks. TikTok is under further examination over the protection of minors and advertising transparency.

The Commission has launched 14 such DSA-related proceedings, none concluded.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia demands answers from AI chatbot providers over child safety

Australia’s eSafety Commissioner has issued legal notices to four major AI companion platforms, requiring them to explain how they are protecting children from harmful or explicit content.

Character.ai, Nomi, Chai, and Chub.ai were all served under the country’s Online Safety Act and must demonstrate compliance with Australia’s Basic Online Safety Expectations.

The notices follow growing concern that AI companions, designed for friendship and emotional support, can expose minors to sexualised conversations, suicidal ideation, and other psychological risks.

eSafety Commissioner Julie Inman Grant said the companies must show how their systems prevent such harms, not merely react to them, warning that failure to comply could lead to penalties of up to $825,000 per day.

AI companion chatbots have surged in popularity among young users, with Character.ai alone attracting nearly 160,000 monthly active users in Australia.

The Commissioner stressed that these services must integrate safety measures by design, as new enforceable codes now extend to AI platforms that previously operated with minimal oversight.

The move comes amid wider efforts to regulate emerging AI technologies and ensure stronger child protection standards online.

Breaches of the new codes could result in civil penalties of up to $49.5 million, marking one of the toughest online safety enforcement regimes globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Large language models mimic human object perception

Recent research shows that multimodal large language models (LLMs) can develop object representations strikingly similar to human cognition. By analysing how these AI models understand and organise concepts, scientists found patterns that mirror neural activity in the human brain.

The study examined embeddings for 1,854 natural objects, derived from millions of text-image pairings. These embeddings capture relationships between objects and were compared with brain scan data from regions like EBA, PPA, RSC and FFA.

Researchers also discovered that multimodal training, which combines text and image data, enhances a model’s ability to form these human-like concepts. The findings suggest that large language models can achieve a more natural understanding of the world, offering potential improvements in human-AI interaction and future model design.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU pushes harder on basic digital skills for growth

Nearly half of EU adults lack basic digital skills, yet most jobs demand them. Eurostat reports only 56% have at least basic proficiency. EU Code Week spotlights the urgency for digital literacy and inclusion.

The Digital Education Action Plan aims to modernise curricula, improve infrastructure, and train teachers. EU policymakers target 80% of adults with basic skills by 2030. Midway progress suggests stronger national action is still required.

Progress remains uneven across regions, with rural connectivity still lagging in places. Belgium began a school smartphone ban across Flanders from 1 September to curb distractions. Educators now balance classroom technology with attention and safety.

Brussels proposed a Union of Skills strategy to align education and competitiveness. The EU also earmarked fresh funding for AI, cybersecurity, and digital skills. Families and schools are urged to develop unplugged problem-solving alongside classroom learning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT faces EU’s toughest platform rules after 120 million users

OpenAI’s ChatGPT could soon face the EU’s strictest platform regulations under the Digital Services Act (DSA), after surpassing 120 million monthly users in Europe.

The milestone places OpenAI’s chatbot above the 45 million-user threshold that triggers heightened oversight.

The DSA imposes stricter obligations on major platforms such as Meta, TikTok, and Amazon, requiring greater transparency, risk assessments, and annual fees to fund EU supervision.

The European Commission confirmed it has begun assessing ChatGPT’s eligibility for the ‘very large online platform’ status, which would bring the total number of regulated platforms to 26.

OpenAI reported that its ChatGPT search function alone had 120.4 million monthly active users across the EU in the six months ending 30 September 2025. Globally, the chatbot now counts around 700 million weekly users.

If designated under the DSA, ChatGPT would be required to curb illegal and harmful content more rigorously and demonstrate how its algorithms handle information, marking the EU’s most direct regulatory test yet for generative AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU states split over children’s social media rules

European leaders remain divided over how to restrict children’s use of social media platforms. While most governments agree stronger protections are needed, there is no consensus on enforcement or age limits.

Twenty-five EU countries, joined by Norway and Iceland, recently signed a declaration supporting tougher child protection rules online. The plan calls for a digital age of majority, potentially restricting under-15s or under-16s from joining social platforms.

France and Denmark back full bans for children below 15, while others prefer verified parental consent. Some nations argue parents should retain primary responsibility, with the state setting only basic safeguards.

Brussels faces pressure to propose EU-wide legislation, but several capitals insist decisions should stay national. Estonia and Belgium declined to sign the declaration, warning that new bans risk overreach and calling instead for digital education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Suzanne Somers lives on in an AI twin

Alan Hamel says he’s moving ahead with a ‘Suzanne AI Twin’ to honour Suzanne Somers’ legacy. The project mirrors plans the couple discussed for decades. He shared an early demo at a recent conference.

Hamel describes the prototype as startlingly lifelike. He says side-by-side, he can’t tell real from AI. The goal is to preserve Suzanne’s voice, look, and mannerisms.

Planned uses include archival storytelling, fan Q&As, and curated appearances. The team is training the model on interviews, performances, and writings. Rights and guardrails are being built in.

Supporters see a new form of remembrance. Critics warn of deepfake risks and consent boundaries. Hamel says fidelity and respect are non-negotiable.

Next steps include wider testing and a controlled public debut. Proceeds could fund causes Suzanne championed. ‘It felt like talking to her,’ Hamel says.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!