Tech firms are racing to integrate AI into social media, reshaping online interaction while raising fresh concerns over privacy, misinformation, and copyright. Platforms like OpenAI’s Sora and Meta’s Vibes are at the centre of the push, blending generative AI tools with short-form video features similar to TikTok.
OpenAI’s Sora allows users to create lifelike videos from text prompts, but film studios say copyrighted material is appearing without permission. OpenAI has promised tighter controls and a revenue-sharing model for rights holders, while Meta has introduced invisible watermarks to identify AI content.
Safety concerns are mounting as well. Lawsuits allege that AI chatbots such as Character.AI have contributed to mental health issues among teenagers. OpenAI and Meta have added stronger restrictions for young users, including limits on mature content and tighter communication controls for minors.
Critics question whether users truly want AI-generated content dominating their feeds, describing the influx as overwhelming and confusing. Yet industry analysts say the shift could define the next era of social media, as companies compete to turn AI creativity into engagement and profit.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Musk said Grok will analyse bitstreams for AI signatures and scan the web to verify the origins of videos. Grok added that it will detect subtle AI artefacts in compression and generation patterns that humans cannot see.
AI tools such as Grok Imagine and Sora are reshaping the internet by making realistic video generation accessible to anyone. The rise of deepfakes has alarmed users, who warn that high-quality fake videos could soon be indistinguishable from real footage.
A user on X expressed concern that leaders are not addressing the growing risks. Elon Musk responded, revealing that his AI company xAI is developing Grok’s ability to detect AI-generated videos and trace their origins online.
‘@grok will be able to analyze the video for AI signatures in the bitstream and then further research the Internet to assess origin,’ Musk wrote.
The detection features aim to rebuild trust in digital media as AI-generated content spreads. Commentators have dubbed the flood of such content ‘AI slop’, raising concerns about misinformation and consent.
Concerns about deepfakes have grown since OpenAI launched the Sora app. A surge in deepfake content prompted OpenAI to tighten restrictions on cameo mode, allowing users to opt out of specific scenarios.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has announced it will give copyright holders more control over how their intellectual property is used in videos produced by Sora 2. The shift comes amid criticism over Sora’s ability to generate scenes featuring popular characters and media, sometimes without permission.
At launch, Sora allowed generation under a default policy that required rights holders to opt out if they did not want their content used. That approach drew immediate backlash from studios and creators, who complained about unauthorised use of copyrighted characters.
OpenAI now says it will introduce ‘more granular control’ for content owners, letting them set parameters for how their work can appear, or choose complete exclusion. The company has also hinted at monetisation features, such as revenue sharing for approved usage of copyrighted content.
CEO Sam Altman acknowledged that feedback from studios, artists and other stakeholders influenced the change. He emphasised that the new content policy would treat fictional characters more cautiously and make character generation opt-in rather than default.
Still unresolved is how precisely the system will work, especially around the enforcement, blocking, or filtering of unauthorised uses. OpenAI has repeatedly framed the updates as evolutionary, acknowledging that design and policy missteps may occur.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Zelda Williams has urged people to stop sending her AI-generated videos of her late father, Robin Williams, calling the practice disturbing and disrespectful. The actor and director said in a social media post that the videos are exploitative and misrepresent what her father would have wanted.
In her post, she said such recreations are ‘dumb’ and a ‘waste of time and energy’, adding that turning human legacies into digital imitations is ‘gross’. She criticised those using AI to simulate deceased performers for online engagement, describing the results as emotionless and detached.
The discussion intensified after the unveiling of ‘AI actor’ Tilly Norwood, created by Dutch performer Eline Van der Velden. Unions and stars such as Emily Blunt condemned the concept, warning that AI-generated characters risk eroding human creativity and emotional authenticity.
Williams previously supported SAG-AFTRA’s campaign against the misuse of AI in entertainment, calling digital recreations of her father’s voice ‘personally disturbing’. She has continued to call for respect for real artists and their legacies.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Video game company Nintendo has denied reports that it lobbied the Japanese government over the use of generative AI. The company issued an official statement on its Japanese X account, clarifying that it has had no contact with the authorities.
The rumour originated from a post by Satoshi Asano, a member of Japan’s House of Representatives, who suggested that private companies had pressed the government on intellectual property protection concerning AI.
After Nintendo’s statement, Asano retracted his remarks and apologised for spreading misinformation.
Nintendo stressed that it would continue to protect its intellectual property against infringement, whether AI was involved or not. The company reaffirmed its cautious approach toward generative AI in game development, focusing on safeguarding creative rights rather than political lobbying.
The episode underscores the sensitivity around AI in Japan’s creative industries, where concerns about copyright and technological disruption are fuelling debate. Nintendo’s swift clarification signals how seriously it takes misinformation and the protection of its brand.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Bombay High Court has granted ad-interim relief to Asha Bhosle, barring AI platforms and sellers from cloning her voice or likeness without consent. The 90-year-old playback singer, whose career spans eight decades, approached the court to protect her identity from unauthorised commercial use.
Bhosle filed the suit after discovering platforms offering AI-generated voice clones mimicking her singing. Her plea argued that such misuse damages her reputation and goodwill. Justice Arif S. Doctor found a strong prima facie case and stated that such actions would cause irreparable harm.
The order restrains defendants, including US-based Mayk Inc, from using machine learning, face-morphing, or generative AI to imitate her voice or likeness. Google, also named in the case, has agreed to take down specific URLs identified by Bhosle’s team.
Defendants are required to share subscriber information, IP logs, and payment details to assist in identifying infringers. The court emphasised that cloning the voices of cultural icons risks misleading the public and infringing on individuals’ rights to their identity.
The ruling builds on recent cases in India affirming personality rights and sets an important precedent in the age of generative AI. The matter is scheduled to return to court on 13 October 2025.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has launched Sora 2.0, the latest version of its video generation model, alongside an iOS app available by invitation in the US and Canada. The tool offers advances in physical realism, audio-video synchronisation, and multi-shot storytelling, with built-in safeguards for security and identity control.
The app allows users to create, remix, or appear in clips generated from text or images. A Pro version, web interface, and developer API are expected soon, extending access to the model.
Sora 2.0 has reignited debate over intellectual property. According to The Wall Street Journal, OpenAI has informed studios and talent agencies that their universes could appear in generated clips unless they opt out.
The company defends its approach as an extension of fan creativity, while stressing that real people’s images and voices require prior consent, validated through a verified cameo system.
By combining new creative tools with identity safeguards, OpenAI aims to position Sora 2.0 as a leading platform in the fast-growing market for AI-generated video.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Elon Musk’s xAI has sued OpenAI, alleging a coordinated and unlawful campaign to steal its proprietary technology. The complaint claims OpenAI targeted former xAI staff to obtain source code, training methods, and data centre strategies.
According to the lawsuit, OpenAI recruiter Tifa Chen offered large compensation packages to engineers who then allegedly uploaded xAI’s source code to personal devices. Notable incidents include Xuechen Li confessing to code theft and Jimmy Fraiture allegedly transferring confidential files via AirDrop on multiple occasions.
Legal experts note the case centres on employee poaching and the definition of xAI’s ‘secret sauce,’ including GPU racking, vendor contracts, and operational playbooks.
Liability may depend on whether OpenAI knowingly directed recruiters, while the company could defend itself by showing independent creation with time-stamped records.
xAI is seeking damages, restitution, and injunctions requiring OpenAI to remove its materials and destroy models built using them. The lawsuit is Musk’s latest legal action against OpenAI, following a recent antitrust case involving Apple over alleged market dominance.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Spotify announced new measures to address AI risks in music, aiming to protect artists’ identities and preserve trust on the platform. The company said AI can boost creativity but also enable harmful content like impersonations and spam that exploit artists and cut into royalties.
A new impersonation policy has been introduced, clarifying that AI-generated vocal clones of artists are only permitted with explicit authorisation. Spotify is strengthening processes to block fraudulent uploads and mismatches, giving artists quicker recourse when their work is misused.
The platform will launch a new spam filter this year to detect and curb manipulative practices like mass uploads and artificially short tracks. The system will be deployed cautiously, with updates added as new abuse tactics emerge, in order to safeguard legitimate creators.
In addition, Spotify will back an industry standard for AI disclosures in music credits, allowing artists and rights holders to show how AI was used in production. The company said these steps reflect its commitment to protecting artists, ensuring transparency, and preserving fair royalties as AI reshapes the music industry.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The European Union and Indonesia have concluded negotiations on a Comprehensive Economic Partnership Agreement (CEPA) and an Investment Protection Agreement (IPA), strongly emphasising technology, digitalisation and sustainable industries.
The agreements are designed to expand trade, secure critical raw materials, and drive the green and digital transitions.
Under the CEPA, tariffs on 98.5% of tariff lines will be removed, cutting costs by €600 million annually and giving EU companies greater access to Indonesia’s fast-growing technology sectors, including electric vehicles, electronics and pharmaceuticals.
European firms will also gain full ownership rights in key service areas such as computers and telecommunications, helping deepen integration of digital supply chains.
The deal embeds commitments to the Paris Agreement while promoting renewable energy and low-carbon technologies. It also includes cooperation on digital standards, intellectual property protections and trade facilitation for sectors vital to Europe’s clean tech and digital industries.
With Indonesia as a leading producer of critical raw materials, the agreement secures sustainable and predictable access to inputs essential for semiconductors, batteries and other strategic technologies.
Launched in 2016, the negotiations concluded after the political agreement reached in July 2025 between European Commission President Ursula von der Leyen and Indonesian President Prabowo Subianto. The texts will undergo legal review before ratification by the EU and Indonesia, opening a new chapter in tech-enabled trade and innovation.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!