OpenAI has announced it will give copyright holders more control over how their intellectual property is used in videos produced by Sora 2. The shift comes amid criticism over Sora’s ability to generate scenes featuring popular characters and media, sometimes without permission.
At launch, Sora allowed generation under a default policy that required rights holders to opt out if they did not want their content used. That approach drew immediate backlash from studios and creators complaining about unauthorised use of copyrighted characters.
OpenAI now says it will introduce ‘more granular control’ for content owners, letting them set parameters for how their work can appear, or choose complete exclusion. The company has also hinted at monetisation features, such as revenue sharing for approved usage of copyrighted content.
CEO Sam Altman acknowledged that feedback from studios, artists and other stakeholders influenced the change. He emphasised that the new content policy would treat fictional characters more cautiously and make character generation opt-in rather than default.
Still unresolved is precisely how the system will work, especially around enforcement and the blocking or filtering of unauthorised uses. OpenAI has repeatedly framed the updates as evolutionary, acknowledging that design and policy missteps may occur.
The update means that if you chat with Meta’s AI about a topic, such as hiking, the system may infer your interests and show related content, including posts from hiking groups or ads for boots. Meta emphasises that content and ad recommendations already use signals like likes, shares and follows, but the new change adds AI interactions as another signal.
Meta will notify users starting 7 October via in-app messages and emails. Users will retain access to settings such as Ads Preferences and feed controls to adjust what they see. Meta says it will not use sensitive AI chat content (religion, health, political beliefs, etc.) to personalise ads.
AI interactions on a particular account will only be used for cross-account personalisation if users have linked those accounts in Meta’s Accounts Centre. Likewise, unless a WhatsApp account is added to the same Accounts Centre, AI interactions on WhatsApp won’t influence the experience in Meta’s other apps.
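To make the mechanism concrete, here is a minimal, purely hypothetical sketch of how an ‘AI chat topic’ signal could sit alongside existing engagement signals such as likes, shares and follows. The function names, weights and sensitive-topic filter are illustrative assumptions, not Meta’s actual implementation.

```python
# Hypothetical sketch: combining engagement signals with topics inferred from AI chats.
# All names, weights, and the sensitive-topic filter are illustrative assumptions;
# they do not reflect Meta's actual systems.

SENSITIVE_TOPICS = {"religion", "health", "political beliefs"}  # excluded from ad personalisation, per Meta's stated policy


def collect_signals(likes, shares, follows, ai_chat_topics):
    """Merge existing engagement signals with topics inferred from AI chats."""
    signals = {}
    for topic in likes + shares + follows:
        signals[topic] = signals.get(topic, 0.0) + 1.0
    for topic in ai_chat_topics:
        if topic in SENSITIVE_TOPICS:
            continue  # sensitive chat content is dropped before personalisation
        signals[topic] = signals.get(topic, 0.0) + 0.5  # AI chats act as one more, weaker signal
    return signals


def rank_content(candidates, signals):
    """Order candidate posts/ads by how well their topic matches the user's signals."""
    return sorted(candidates, key=lambda c: signals.get(c["topic"], 0.0), reverse=True)


# Example: a user who liked a cycling page and chatted with the AI about hiking.
signals = collect_signals(
    likes=["cycling"], shares=[], follows=[],
    ai_chat_topics=["hiking", "health"],  # "health" is discarded as sensitive
)
ads = [{"topic": "hiking", "item": "boots ad"}, {"topic": "gaming", "item": "console ad"}]
print(rank_content(ads, signals))  # the hiking-related ad ranks first
```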
China’s internet watchdog, the Cyberspace Administration of China (CAC), has warned online platforms Kuaishou Technology and Weibo for failing to curb celebrity gossip and harmful content on their platforms.
The CAC issued formal warnings, citing damage to the ‘online ecosystem’ and demanding corrective action. Both firms pledged compliance, with Kuaishou forming a task force and Weibo promising self-reflection.
The move follows similar disciplinary action against lifestyle app RedNote and is part of a broader two-month campaign targeting content that ‘viciously stimulates negative emotions.’
Separately, Kuaishou is under investigation by the State Administration for Market Regulation for alleged malpractice in live-streaming e-commerce.
Last week, the TikTok saga continued to unfold: the Chinese government did not agree to the sale of ByteDance’s subsidiary to a US-majority-owned TikTok entity, so US President Donald Trump extended the deadline for finding a non-Chinese buyer by another 75 days, pushing the cutoff to mid-June after a near-miss on 5 April.
Amid the turmoil over rising tariffs, President Donald Trump’s administration granted exemptions from steep tariffs on smartphones, laptops, and other electronics, bringing relief to tech giants like Apple and Dell.
In the cryptocurrency landscape, a blockchain analytics firm made waves by alleging that the team behind the Melania Meme (MELANIA) cryptocurrency moved $30 million worth of tokens, allegedly taken from community reserves, without explanation.
In the ever-evolving world of AI, two leading AI systems, OpenAI’s GPT-4.5 and Meta’s Llama-3.1, have passed a key milestone by outperforming humans in a modern version of the Turing Test.
On the cybersecurity stage, Oracle Health has reportedly suffered a data breach that compromised sensitive patient information stored by US hospitals.
The European Union has firmly ruled out dismantling its strict digital regulations in a bid to secure a trade deal with Donald Trump. Henna Virkkunen, the EU’s top official for digital policy, said the bloc remained fully committed to its digital rulebook instead of relaxing its standards to satisfy US demands.
Meta faces an existential threat from a colossal antitrust trial that commenced in Washington, with the US Federal Trade Commission (FTC) arguing that the company’s acquisitions of Instagram in 2012 and WhatsApp in 2014 were designed to crush competition and secure a monopoly rather than foster innovation.
Elon Musk’s legal saga with OpenAI has intensified, as OpenAI filed a countersuit accusing the billionaire entrepreneur of a sustained campaign of harassment intended to damage the company and regain control over its AI developments.
For the main updates and reflections, consult the Radar and Reading Corner below.
The 71% discount on Google Workspace is part of a cost-cutting initiative under President Trump’s government reform, targeting federal spending efficiency.
A discussion paper on crypto regulation in Japan highlights issues like market access, insider trading, and classification of assets into funding and non-funding categories.
As AI demand shifts, Microsoft has slowed down major data centre projects, including the one in Ohio, and plans to invest $80 billion in AI infrastructure this year.
With over 10,000 AI applications available, selecting the right AI tool can be daunting. Diplo advocates starting with a ‘good enough’ tool to avoid paralysis by analysis, tailoring it to specific needs through practical use.
International Geneva faces significant challenges, including financial constraints, waning multilateralism, and escalating geopolitical tensions. To remain relevant, it must embrace transformative changes, particularly through Artificial Intelligence (AI).
Founded by Bill Gates and Paul Allen in 1975, Microsoft grew from a small startup into the world’s largest software company. Through strategic acquisitions, the company expanded into diverse sectors,…
Do ideas have origins? From medieval communes to WWI, Aldo Matteucci shows how political thought, like a river, is shaped by experience, institutions, and historical context — not just theory.
Tech attaché briefing: WSIS+20 and AI governance negotiations – updates and next steps. The event is part of a series of regular briefings the Geneva Internet Platform (GIP) delivers for diplomats at permanent missions and delegations in Geneva who follow digital policy issues. It is an invitation-only event.
WIPO’s 11th Conversation on IP and AI will take place on 23–24 April 2025, focusing on the role of copyright infrastructure in supporting both rights holders and AI-driven innovation. As…
DW Weekly #206 – Impact of Trump’s tariffs on the tech industry
Dear readers,
Last week brought tectonic shifts in the global economy, as US tariffs dismantled the existing trade order. So far, the tech and digital sectors have been affected mostly indirectly, mainly through price increases for hardware components, including semiconductors and servers, imported from China, Taiwan, and Vietnam. Apple and Samsung have already announced rises in smartphone prices.
However, the impact on the US tech sector and the global digital economy could worsen if the trade war escalates. In such a scenario, the European Union and other countries will likely introduce digital services taxes and stricter regulation of US tech giants, as analysed by Jovan Kurbalija in Algorithms confront tariffs: A hidden digital front in an emerging trade war.
Looming economic uncertainty may drive up the value of Bitcoin, seen as a safer option than others for preserving wealth and investing.
Amid geopolitical tensions, cybersecurity has risen in relevance. The UK and Japan passed new cybersecurity bills that protect critical infrastructure. As of 1 April, Switzerland requires critical infrastructure operators to report cyberattacks within 24 hours to the National Cybersecurity Center.
Microsoft is scaling back the development of new AI data centres, a sign of slowing momentum in the field and a precaution against a potential AI bubble.
Coimisiún na Meán, leading DSA enforcement in Ireland, faces varying interpretations of the law among EU members, making a unified approach to regulation crucial.
US lawmakers are advancing stablecoin legislation aimed at increasing transparency, securing reserves, and strengthening the dollar’s role in digital payments.
The logical and analytical foundations laid by the Lwów–Warsaw School significantly support both the technical and ethical dimensions of AI transformation.
Key interested parties now include Amazon, which has expressed interest in line with its social media expansion ambitions, and a consortium led by OnlyFans founder Tim Stokely, which is proposing a model…
The Active Cyber Defence Bill would enable pre-emptive and active cyber measures by the military and law enforcement, mandate incident reporting from critical infrastructure operators, and allow limited data collection to monitor…
What started as a fun artistic trend has quickly turned into a technical nightmare for OpenAI, with its CEO pleading for a break as servers buckle under pressure.
President Trump’s tariffs on goods have intensified global trade tensions, notably with the EU. However, they largely ignore the critical sector of digital services, where the US holds a strong advantage. In response, European nations have proposed digital services taxes (DSTs) aimed at American tech giants, framing them as necessary for fiscal fairness. The collapse of OECD negotiations has prompted unilateral digital taxes across various countries, escalating the trade conflict. This shift towards digital taxation could redefine international trade diplomacy, posing challenges for US tech dominance and potentially leading to retaliation that affects both goods and digital markets.
The concept of digital sovereignty has gained prominence. This discussion examines the tension between territorial politics and transborder digital operations, highlighting how demands for autonomy reflect a desire to navigate external influences within an interconnected digital landscape. As sovereignty claims become entwined with security narratives, the necessity to socially anchor digital sovereignty policies is emphasised.
The Lwów–Warsaw School of Philosophy, a pioneering movement in Polish thought, has made lasting contributions to philosophy highly relevant to modern AI. The school’s work in logic and semantics provides essential tools for AI, while its analytical approach offers insights into ethical challenges.
An ermine plays peek-a-boo from a rotting tree. Cute? Maybe. But it might also be a calculated survival tactic. Aldo Matteucci explores provocation in the wild – and its unsettling parallel in human behaviour.
In this week’s edition, we untangle the clash of free speech, power, and platforms during the mass protests in Türkiye, where many X accounts have been suspended, and dig into what it means for global diplomacy, content policy, and the future of online speech.
IN FOCUS: Turkish protests – Freedom of speech has taken centre stage this week, with Türkiye’s streets erupting in mass protests and Elon Musk’s platform X again under fire. With account suspensions, government requests, and a tech giant caught between free expression and compliance, all eyes are on how X navigates this storm.
RADAR: UN General Assembly adopts resolution on WSIS+20 review modalities; Trump weighs tariff cuts to secure TikTok deal; EU softens AI copyright rules; SoftBank leads massive investment in OpenAI.
The recent suspension of many accounts on X (formerly Twitter) amid Türkiye’s civil unrest has provoked a complex debate about freedom of speech and content moderation policies. This latest case again shows the delicate balance social media platforms must strike between upholding free expression and adhering to governmental regulations, particularly in politically volatile environments.
THE CONTEXT: What’s happening in Türkiye?
The turmoil in Türkiye was sparked by the arrest of Istanbul’s mayor, Ekrem İmamoğlu, a potential opposition candidate in the Turkish presidential election. Charged with alleged corruption and ties to terrorism, İmamoğlu was detained, prompting widespread protests across major Turkish cities, including Istanbul, Ankara, and İzmir. Demonstrators viewed the arrest as a politically motivated attempt to sideline a key opposition figure ahead of the 2028 presidential elections. The government’s response was swift, resulting in over 1,100 arrests, including several journalists.
Amid the escalating protests, the Turkish Information and Communication Technologies Authority reportedly requested X to block more than 700 accounts, including those of news organisations, journalists, and political figures. These accounts primarily shared information about protest locations and organised demonstrations. Yusuf Can, the Wilson Center’s Middle East Program coordinator, noted that many suspended accounts were ‘university-associated activist accounts, basically sharing protest information, locations for students to go.’
However, X’s actions were inconsistent. While it allegedly suspended some accounts selectively, the platform publicly rejected the broader demand to block over 700 accounts, labelling the request as ‘illegal’ and asserting:
‘X will always defend freedom of speech everywhere we operate.’
Either way, the suspension of accounts during the Turkish protests raises critical questions about the responsibilities of social media platforms. While platforms like X operate globally, they must navigate a complex web of local laws and regulations. In Türkiye, laws mandate that social media companies appoint local representatives and comply with content removal requests under threat of fines or bandwidth reductions. This legal framework places platforms in a challenging position, forcing them to balance compliance with government demands against user rights and freedom of expression.
To conclude:
The measures taken by X amid Türkiye’s protests underscore a constant challenge: content policy is not set in stone; it is continuously contested between big tech, national power and the voice of the people. As social media platforms play an integral role in political discourse and activism, their content moderation policies and responses to governmental requests will remain under intense scrutiny. Such recurring controversies demand transparent policies that enable companies to uphold free expression while curbing harmful content, mindful of the complex interplay between content rules and political dynamics.
Find the full dig.watch analysis here or in our READING CORNER!
Legal experts are divided over whether the SEC’s lawsuit against Musk is justified or politically motivated.
RADAR:
UN General Assembly adopts resolution on WSIS+20 review modalities
On 25 March 2025, the UN General Assembly (UNGA) adopted the resolution defining the modalities for the overall review of the implementation of the outcomes of the World Summit on the Information Society (the WSIS+20 review).
Trump weighs tariff cuts to secure TikTok deal
US President Donald Trump has indicated he is willing to reduce tariffs on China as part of a deal with ByteDance, TikTok’s Chinese parent company, to sell the popular short-video app.
Amid European legal shifts, developers of general-purpose AI models are finding clearer ground, as the latest draft of the EU AI Act’s copyright guidelines embraces practicality and proportionate enforcement.
As ChatGPT’s features continue to capture the public’s imagination, OpenAI is close to sealing a colossal funding deal led by SoftBank that would double its valuation within months.
Financial authorities are split on crypto regulation, with the Central Bank pushing for a ban and the Ministry of Finance considering limited access for top investors.
Demand for Nvidia’s H20 chips is surging as Chinese tech giants, including Tencent and Alibaba, rush to adopt AI models, straining already limited supplies.
The V3 model from DeepSeek offers enhanced performance metrics and positions the Chinese startup as a growing rival to major AI players like OpenAI and Anthropic.
China rejected US accusations in the intelligence report, accusing Washington of using outdated Cold War thinking and hyping the ‘China threat’ to maintain military dominance.
Every March, yellow rain coats European cities, tinting cars and sidewalks with a golden hue. This striking phenomenon occurs when Saharan dust, carried by wind, travels thousands of kilometres and is washed down by rain. I learnt about it through conversations at the World Meteorological Organisation (WMO), where my office is located. In 2021, when 180,000 tonnes of dust swept across Europe, a webinar with Dr Slobodan Nickovic, creator of the ‘dust model’, deepened my understanding of this interplay between nature, science, and diplomacy, leading to reflections you can read in the original blog.
Only 2% of wild bees do 80% of the pollination. Should we still save the other 700 species? The debate is not just ecological – it’s moral vs economic.
No system works without standards – not cities, not cyberspace. As the metaverse grows, it needs rules that go beyond code. Read Part 6 of the new metaverse blog series: UN 2.0 and the Metaverse: Are We Seeing What Is Possible?
As AI’s energy demands surge, nuclear power is emerging as a key solution to sustain its growth while minimising carbon emissions. Tech giants like Microsoft, Google, and Amazon are investing heavily in nuclear energy to power AI-driven data centres, signalling a potential nuclear renaissance in the age of AI.
The Centre for the Fourth Industrial Revolution and the Ministry of ICT & Innovation, in collaboration with the World Economic Forum, will host the inaugural Global AI Summit on Africa…
Training for the Republic of Serbia Commissioner for Information of Public Importance and Personal Data Protection. The representatives of the Commissioner for…
Britain’s media regulator, Ofcom, has set a 31 March deadline for social media and online platforms to submit a risk assessment on the likelihood of users encountering illegal content. This move follows new laws passed last year requiring companies such as Meta’s Facebook and Instagram, as well as ByteDance’s TikTok, to take action against criminal activities on their platforms. Under the Online Safety Act, these firms must assess and address the risks of offences like terrorism, hate crimes, child sexual exploitation, and financial fraud.
The risk assessment must evaluate how likely it is for users to come across illegal content, or how user-to-user services could facilitate criminal activities. Ofcom has warned that failure to meet the deadline could result in enforcement actions against the companies. The new regulations aim to make online platforms safer and hold them accountable for the content shared on their sites.
The deadline is part of the UK‘s broader push to regulate online content and enhance user safety. Social media giants are now facing stricter scrutiny to ensure they are addressing potential risks associated with their platforms and protecting users from harmful content.
The Japanese Defence Ministry has unveiled its inaugural policy to promote AI use, aiming to adapt to technological advancements in defence operations. Focusing on seven key areas, including detection and identification of military targets, command and control, and logistic support, the policy aims to streamline the ministry’s work and respond to changes in technology-driven defence operations.
The new policy highlights that AI can enhance combat operation speed, reduce human error, and improve efficiency through automation. AI is also expected to aid in information gathering and analysis, unmanned defence assets, cybersecurity, and work efficiency. However, the policy acknowledges the limitations of AI, particularly in unprecedented situations, and concerns regarding its credibility and potential misuse.
The Defence Ministry plans to secure human resources with cyber expertise to address these issues, starting a specialised recruitment category in fiscal 2025. Defence Minister Minoru Kihara emphasised the importance of adapting to new forms of battle using AI and cyber technologies and stressed the need for cooperation with the private sector and international agencies.
Recognising the risks associated with AI use, Kihara highlighted the importance of accurately identifying and addressing these shortcomings. He stated that Japan’s ability to adapt to new forms of battle with AI and cyber technologies is a significant challenge in building up its defence capabilities. The ministry aims to deepen cooperation with the private sector and relevant foreign agencies by proactively sharing its views and strategies.
The Biden administration is pressing major technology companies to intensify their efforts to reduce antisemitic content on their platforms. Representatives from Alphabet, Meta, Microsoft, TikTok, and X met with US special envoy Deborah Lipstadt to discuss strategies for monitoring and combating antisemitism. Lipstadt emphasised the need for each company to assign a policy team member to address the issue, conduct specialised training to identify antisemitism, and publicly report trends in anti-Jewish content.
TikTok supported the meeting, highlighting its ongoing efforts and commitment to learning from experts. However, Alphabet, Microsoft, Meta, and X have yet to respond to requests for comment on the matter. The US administration is also calling for enhanced training to help platform staff recognise subtle antisemitic messages and distinguish between legitimate criticism of the Israeli government and hate speech directed at Jews.
The push from the administration comes amid a global increase in antisemitism following the 7 October attack by Hamas on southern Israel and the subsequent Israeli military response in Gaza. While the tech companies have not yet committed to voluntary measures, Lipstadt remains hopeful that they will take action soon to address this pressing issue.
A parliamentary committee in Canada is recommending that tech giants be held responsible for sharing false or misleading information online, particularly when foreign actors propagate it. However, the Conservatives on the committee did not back this call, saying it would endorse online censorship.
This is one of 22 recommendations by the House Ethics Committee, which studied foreign interference in Canada’s affairs, focusing on China and Russia. The committee’s report also calls for creating a foreign agent registry and improved measures to combat online misinformation.
The government has 60 days to respond to these recommendations, which have gained attention due to increasing concerns about foreign meddling in the country’s internal matters.
Why does it matter?
Concerns have been mounting in Western countries about campaigns orchestrated by foreign actors. In August, Meta, in collaboration with Australian research groups, dismantled the world’s largest covert Chinese spam network. This network was designed to target global users, promoting pro-China content while criticising Western nations and their policies. A recent US intelligence report also revealed Russia’s extensive efforts to undermine public trust in global elections through espionage, state-controlled media, and manipulation of social media. In the era of digital information warfare, nations face the challenge of safeguarding their democratic processes and preserving public trust.