EU finalises AI code as 2025 compliance deadline approaches

The European Commission has released its finalised Code of Practice for general-purpose AI models, laying the groundwork for implementing the landmark AI Act. The new Code sets out transparency, copyright, and safety rules that developers must follow ahead of the Act’s compliance deadlines.

Approved in March 2024 and in force since August of that year, the AI Act introduces the EU’s first binding rules for AI. It bans practices deemed to pose unacceptable risk, such as real-time biometric surveillance, predictive policing, and emotion recognition in schools and workplaces.

Stricter obligations will apply to general-purpose models from August 2025, including mandatory documentation of training data, provided disclosure does not violate intellectual property rights or trade secrets.

The Code of Practice, developed by experts with input from over 1,000 stakeholders, aims to guide AI providers through the AI Act’s requirements. It mandates model documentation, lawful content sourcing, risk management protocols, and a point of contact for copyright complaints.

However, industry voices, including the CCIA, have criticised the Code, saying it disproportionately burdens AI developers.

Member States and the European Commission will assess the effectiveness of the Code in the coming months. Enforcement for existing models will begin in August 2026, while new models must comply a year earlier, from August 2025.

The Commission says these steps are vital to ensure GPAI models are safe, transparent, and rights-respecting across the EU.

The rise and risks of synthetic media

Synthetic media transforms content creation across sectors

The rapid development of AI has enabled significant breakthroughs in synthetic media, opening up new opportunities in healthcare, education, entertainment and many other sectors.

Instead of relying on traditional content creation, companies are now using advanced tools to produce immersive experiences, training simulations and personalised campaigns. But what exactly is synthetic media?

Synthetic media refers to content produced partly or entirely by AI, including AI-generated images, music, video and speech. Tools such as ChatGPT, Midjourney and voice synthesisers are now widely used in both creative and commercial settings.

The global market for synthetic media is expanding rapidly. Valued at USD 4.5 billion in 2023, it is projected to reach USD 16.6 billion by 2033, driven mainly by tools that convert text into images, videos or synthetic speech.
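
As a quick sanity check on what that projection implies, the jump from USD 4.5 billion to USD 16.6 billion over ten years corresponds to roughly 14% compound annual growth. Here is a minimal Python snippet using only the figures quoted above:

```python
# Growth rate implied by the market figures above (2023 to 2033).
value_2023, value_2033, years = 4.5, 16.6, 10  # USD billions
cagr = (value_2033 / value_2023) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # ~13.9%
```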

The appeal lies in its scalability and flexibility: small teams can now quickly produce a wide range of professional-grade content and easily adapt it for multiple audiences or languages.

However, as synthetic media becomes more widespread, so do the ethical challenges it poses.

How deepfakes threaten trust and security

The same technology has raised serious concerns as deepfakes – highly realistic but fake audio, images and videos – become harder to detect and more frequently misused.

Deepfakes, a subset of synthetic media, go a step further by creating content that intentionally imitates real people in deceptive ways, often for manipulation or fraud.

The technology behind deepfakes involves face swapping through variational autoencoders and voice cloning via synthesised speech patterns. The entry barrier is low, making these tools accessible to the general public.
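
As a rough illustration of the autoencoder idea mentioned above, the classic face-swap setup trains a single shared encoder with a separate decoder per identity; the swap happens by decoding one person’s features with the other person’s decoder. The sketch below is a minimal, assumption-laden PyTorch toy; the layer sizes, the 64x64 input, and all names are illustrative and do not describe any real deepfake tool.

```python
# Minimal sketch (assumed architecture): shared encoder, one decoder per identity.
import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    def __init__(self, img_dim=64 * 64 * 3, latent_dim=256):
        super().__init__()
        # A single shared encoder learns identity-agnostic face features.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(img_dim, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim), nn.ReLU(),
        )
        # One decoder per identity reconstructs that person's face.
        self.decoder_a = self._make_decoder(latent_dim, img_dim)
        self.decoder_b = self._make_decoder(latent_dim, img_dim)

    @staticmethod
    def _make_decoder(latent_dim, img_dim):
        return nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, img_dim), nn.Sigmoid(),
        )

    def forward(self, x, identity):
        z = self.encoder(x)
        decoder = self.decoder_a if identity == "a" else self.decoder_b
        return decoder(z).view(x.shape)

# Training reconstructs each person with their own decoder; the swap happens
# at inference time by routing person A's face through person B's decoder.
model = FaceSwapAutoencoder()
face_a = torch.rand(1, 3, 64, 64)        # stand-in for a frame of person A
swapped = model(face_a, identity="b")    # rendered through person B's decoder
```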

First surfacing on Reddit in 2017, deepfakes have quickly expanded into healthcare, entertainment, and education, yet they also pose a serious threat when misused. For example, a major financial scam recently cost a company USD 25 million due to a deepfaked video call with a fake CFO.

Synthetic media fuels global political narratives

Politicians and supporters have often openly used generative AI to share satirical or exaggerated content, rather than attempting to disguise it as real.

In Indonesia, AI even brought back the likeness of former dictator Suharto to endorse candidates, while in India, meme culture thrived but failed to significantly influence voters’ decisions.

In the USA, figures like Elon Musk and Donald Trump have embraced AI-generated memes and voice parodies to mock opponents or improve their public image.

While these tools have made it easier to create misinformation, researchers such as UC Berkeley’s Hany Farid argue that the greater threat lies in the gradual erosion of trust, rather than a single viral deepfake.

It is becoming increasingly difficult for users to distinguish truth from fiction, contaminating the information environment and harming public discourse. Legal concerns, public scrutiny, and the proliferation of ‘cheapfakes’ (manipulated media that do not rely on AI) may have kept the worst predictions from materialising.

Nonetheless, experts warn that the use of AI in campaigns will only grow more sophisticated. Without clear regulation and ethical safeguards, future elections may struggle to contain the disruptive influence of synthetic media.

Children use AI to create harmful deepfakes

School-aged children are increasingly using AI tools to generate explicit deepfake images of their classmates, often targeting girls. What began as a novelty has become a new form of digital sexual abuse.

With just a smartphone and a popular app, teenagers can now create and share highly realistic fake nudes, turning moments of celebration, like a bat mitzvah photo, into weapons of humiliation.

Far from being simple pranks, these acts have severe psychological consequences for victims and are leaving lawmakers scrambling to respond.

Educators and parents are now calling for urgent action. Instead of just warning teens about criminal consequences, schools are starting to teach digital ethics, consent, and responsible use of technology.

Programmes that explain the harm caused by deepfakes may offer a better path forward than punishment alone. Experts say the core issues—respect, agency, and safety—are not new.

The tools may be more advanced, but the message remains the same: technology must be used responsibly, not to exploit others.

Deepfakes become weapons of modern war

Deepfakes can also be deployed to sow confusion, falsify military orders, and manipulate public opinion. While not all such tactics will succeed, their growing use in psychological and propaganda operations cannot be ignored.

Intelligence agencies are already exploring how to integrate synthetic media into information warfare strategies, despite the risk of backfiring.

A new academic study from University College Cork examined how wartime deepfake videos spread on social media and how users reacted.

While many responded with scepticism and attempts at verification, others began accusing the real footage of being fake. The growing confusion risks creating an online environment where no information feels trustworthy, exactly the outcome hostile actors might seek.

While deception has long been part of warfare, deepfakes challenge the legal boundaries defined by international humanitarian law.

Falsifying surrender orders to launch ambushes could qualify as perfidy—a war crime—while misleading enemies about troop positions may remain lawful.

Yet when civilians are caught in the crossfire of digital lies, violations of the Geneva Conventions become harder to ignore.

Regulation is lagging behind the technology, and without urgent action, deepfakes may become as destructive as conventional weapons, redefining both warfare and the concept of truth.

The good side of deepfake technology

Yet, not all applications are harmful. In medicine, deepfakes can aid therapy or generate synthetic ECG data for research while protecting patient privacy. In education, the technology can recreate historical figures or deliver immersive experiences.

Journalists and human rights activists also use synthetic avatars for anonymity in repressive environments. Meanwhile, in entertainment, deepfakes offer cost-effective ways to recreate actors or build virtual sets.

These examples highlight how the same technology that fuels disinformation can also be harnessed for innovation and the public good.

Governments push for deepfake transparency

However, the risks are rising. Misinformation, fraud, nonconsensual content, and identity theft are all becoming more common.

The danger of copyright infringement and data privacy violations also looms large, particularly when AI-generated material pulls content from social media or copyrighted works without permission.

Policymakers are taking action, but is it enough?

The USA has banned AI robocalls, and Europe’s AI Act aims to regulate synthetic content. Experts emphasise the need for worldwide cooperation, with regulation focusing on consent, accountability, and transparency.

Embedding watermarks and enforcing civil liabilities are among the strategies being considered. To navigate the new landscape, a collaborative effort across governments, industry, and the public is crucial, not just to detect deepfakes but also to define their responsible use.

Some emerging detection methods include certifying content provenance, where creators or custodians attach verifiable information about the origin and authenticity of media.
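
As a minimal sketch of that provenance idea, a creator (or custodian) could publish a signed fingerprint of a media file that anyone can later verify against the creator’s public key. The example below uses Ed25519 signatures from the Python cryptography package and is illustrative only; real provenance schemes such as C2PA attach richer, standardised manifests to the media itself.

```python
# Illustrative content-provenance check: sign a hash of the media file so its
# origin and integrity can be verified later. Not a real provenance standard.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def fingerprint(media_bytes: bytes) -> bytes:
    """Stable digest of the media content."""
    return hashlib.sha256(media_bytes).digest()

# The creator signs the fingerprint once, at publication time.
creator_key = Ed25519PrivateKey.generate()
media = b"...raw image or video bytes..."
signature = creator_key.sign(fingerprint(media))

# Anyone holding the matching public key can later confirm the file is
# unmodified and really came from the keyholder.
public_key = creator_key.public_key()
try:
    public_key.verify(signature, fingerprint(media))
    print("Provenance check passed")
except InvalidSignature:
    print("File altered or not from the claimed source")
```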

Automated detection systems analyse inconsistencies in facial movements, speech patterns, or visual blending to identify manipulated media. Additionally, platform moderation based on account reputation and behaviour helps filter suspicious sources.

Systems that process or store personal data must also comply with privacy regulations, ensuring individuals’ rights to correct or erase inaccurate data.

Yet, despite these efforts, many of these systems still struggle to reliably distinguish synthetic content from authentic material.

As detection methods lag behind, organisations such as Reality Defender and Witness are working to raise awareness and develop countermeasures.

The rise of AI influencers on social media

Another subset of synthetic media is the AI-generated influencer. AI (or synthetic) influencers are virtual personas powered by AI, designed to interact with followers, create content, and promote brands across social media platforms.

Unlike traditional influencers, they are not real people but computer-generated characters that simulate human behaviour and emotional responses. Developers use deep learning, natural language processing, and sophisticated graphic design to make these influencers appear lifelike and relatable.

Once launched, they operate continuously, often in multiple languages and across different time zones, giving brands a global presence without the limitations of human engagement.

These virtual influencers offer several key advantages for brands. They can be precisely controlled to maintain consistent messaging and avoid the unpredictability that can come with human influencers.

Their scalability allows them to reach diverse markets with tailored content, and over time, they may prove more cost-efficient due to their ability to produce content at scale without the ongoing costs of human talent.

Brands can also experiment with creative storytelling in new and visually compelling ways that might be difficult for real-life creators.

Synthetic influencers have also begun appearing in the healthcare sector; although their popularity there remains limited, it is expected to grow rapidly.

Their rise also brings significant challenges. AI influencers lack genuine authenticity and emotional depth, which can hinder the formation of meaningful connections with audiences.

Their use raises ethical concerns around transparency, especially if followers are unaware that they are interacting with AI.

Data privacy is another concern, as these systems often rely on collecting and analysing large amounts of user information to function effectively.

Additionally, while they may save money in the long run, creating and maintaining a sophisticated AI influencer involves a substantial upfront investment.

Study warns of backlash from synthetic influencers

A new study from Northeastern University urges caution when using AI-powered influencers, despite their futuristic appeal and rising prominence.

While these digital figures may offer brands a modern edge, they risk inflicting greater harm on consumer trust than human influencers do when problems arise.

The findings show that consumers are more inclined to hold the brand accountable if a virtual influencer promotes a faulty product or spreads misleading information.

Rather than viewing these AI personas as independent agents, users tend to see them as direct reflections of the company behind them. Instead of blaming the influencer, audiences shift responsibility to the brand itself.

Interestingly, while human influencers are more likely to be held personally liable, virtual influencers still cause deeper reputational damage to the brands behind them.

People assume that their actions are fully scripted and approved by the business, making any error seem deliberate or embedded in company practices rather than a personal mistake.

Regardless of these risks, AI influencers are reshaping the marketing landscape by providing an innovative and highly adaptable tool for brands. While they are unlikely to replace human influencers entirely, they are expected to play a growing role in digital marketing.

Their continued rise will likely force regulators, brands, and developers to establish clearer ethical standards and guidelines to ensure responsible and transparent use.

Shaping the future of synthetic media

In conclusion, the growing presence of synthetic media invites both excitement and reflection. As researchers, policymakers, and creators grapple with its implications, the challenge lies not in halting progress but in shaping it thoughtfully.

All forms of synthetic media, like any other form of technology, have a dual capacity to empower and exploit, demanding a new digital literacy — one that prioritises critical engagement, ethical responsibility, and cross-sector collaboration.

On the one hand, deepfakes threaten democratic stability, information integrity, and civilian safety, blurring the line between truth and fabrication in conflict, politics, and public discourse.

On the other hand, AI influencers are transforming marketing and entertainment by offering scalable, controllable, and hyper-curated personas that challenge notions of authenticity and human connection.

Rather than fearing the tools themselves, we as human beings need to focus on cultivating the norms and safeguards that determine how, and for whom, they are used. Ultimately, these tools are meant to enhance our way of life, not undermine it.

OpenAI locks down operations after DeepSeek model concerns

OpenAI has significantly tightened its internal security following reports that DeepSeek may have replicated its models. DeepSeek allegedly used distillation techniques to launch a competing product earlier this year, prompting a swift response.

OpenAI has introduced strict access protocols to prevent information leaks, including fingerprint scans, offline servers, and a policy restricting internet use without approval. Sensitive projects, such as its o1 model, are now discussed only by approved staff in designated areas.

The company has also boosted cybersecurity staffing and reinforced its data centre defences. Confidential development information is now shielded through ‘information tenting’.

These actions coincide with OpenAI’s $30 billion deal with Oracle to lease 4.5 gigawatts of data centre capacity across the United States. The partnership plays a central role in OpenAI’s growing Stargate infrastructure strategy.

Court ruling raises alarm over saved ChatGPT chats

A US federal court has ordered OpenAI to preserve nearly all user chats with ChatGPT, including those that users had deleted. The decision comes as part of The New York Times’s ongoing copyright lawsuit, triggering widespread privacy concerns.

The ruling means that millions of personal conversations, previously thought erased, will remain accessible during litigation. These exchanges may include medical queries, relationship issues, and other private matters shared in confidence.

Privacy advocates argue that users were not notified or allowed to object. Critics warn the US ruling sets a dangerous precedent, enabling mass data preservation in lawsuits unrelated to most users.

The Times claims users may have deleted chats to hide copyright infringement. Lawyers and privacy experts counter that people delete chats for legitimate, non-infringing reasons and should retain control over their data.

Legal experts call the preservation order excessive, noting it undermines trust in AI tools and could lead to a chilling effect on their use. The decision could reshape how user privacy is treated in tech litigation for years.

Google hit with EU complaint over AI Overviews

Google is facing an antitrust complaint in the European Union over its AI Overviews feature, following a formal filing by the Independent Publishers Alliance.

The group alleges that Google has been using web content without proper consent to power its AI-generated summaries, causing considerable harm to online publishers.

The complaint claims that publishers have lost traffic, readers and advertising revenue due to these summaries. It also argues that opting out of AI Overviews is not a real choice unless publishers are prepared to vanish entirely from Google’s search results.

AI Overviews were launched over a year ago and now appear at the top of many search queries, summarising information using AI. Although the tool has expanded rapidly, critics argue it drives users away from original publisher websites, especially news outlets.

Google has responded by stating its AI search tools allow users to ask more complex questions and help businesses and creators get discovered. The tech giant also insisted that web traffic patterns are influenced by many factors and warned against conclusions based on limited data.

Cloudflare’s new tool lets publishers charge AI crawlers

Cloudflare, which powers 20% of the web, has launched a new marketplace called Pay per Crawl, aiming to redefine how website owners interact with AI companies.

The platform allows publishers to set a price for AI crawlers to access their content instead of allowing unrestricted scraping or blocking. Website owners can decide to charge a micropayment for each crawl, permit free access, or block crawlers altogether, gaining more control over their material.
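
A loose sketch of how such a per-crawler decision might look in code is shown below. The policy values, bot names, and helper function are hypothetical rather than Cloudflare’s actual API; the HTTP 402 ‘Payment Required’ status is used here simply as a natural refusal signal for a priced crawl.

```python
# Hypothetical per-crawl pricing decision: charge, allow free, or block.
from dataclasses import dataclass

@dataclass
class CrawlPolicy:
    mode: str            # "free", "charge" or "block"
    price_usd: float = 0.0

# Site owner's per-crawler policy (illustrative values only).
POLICIES = {
    "example-ai-bot": CrawlPolicy(mode="charge", price_usd=0.01),
    "search-bot": CrawlPolicy(mode="free"),
}

def handle_crawl(user_agent: str, payment_committed: bool) -> tuple[int, str]:
    """Return an HTTP status and message for an incoming crawler request."""
    policy = POLICIES.get(user_agent, CrawlPolicy(mode="block"))
    if policy.mode == "block":
        return 403, "Crawling not permitted"
    if policy.mode == "charge" and not payment_committed:
        # 402 Payment Required: the crawler must accept the price and retry.
        return 402, f"Crawl priced at ${policy.price_usd:.2f}"
    return 200, "Content served; crawl billed if priced"

print(handle_crawl("example-ai-bot", payment_committed=False))  # (402, ...)
print(handle_crawl("example-ai-bot", payment_committed=True))   # (200, ...)
print(handle_crawl("unknown-bot", payment_committed=False))     # (403, ...)
```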

Over the past year, Cloudflare introduced tools for publishers to monitor and block AI crawlers, laying the groundwork for the marketplace. Major publishers like Conde Nast, TIME and The Associated Press have joined Cloudflare in blocking AI crawlers by default, supporting a permission-based approach.

The company also now blocks AI bots by default on all new sites, requiring site owners to grant access.

Cloudflare’s data reveals that AI crawlers scrape websites far more aggressively than traditional search engines, often without sending equivalent referral traffic. For example, OpenAI’s crawler scraped sites 1,700 times for every referral, compared to Google’s 14 times.

As AI agents evolve to gather and deliver information directly, publishers who rely on site visits for revenue face new challenges.

Pay per Crawl could offer a new business model for publishers in an AI-driven world. Cloudflare envisions a future where AI agents operate with a budget to access quality content programmatically, helping users synthesise information from trusted sources.

For now, both publishers and AI companies need Cloudflare accounts to set crawl rates, with Cloudflare managing payments. The company is also exploring stablecoins as a possible payment method in the future.

AI rock band’s Spotify rise fuels calls for transparency

A mysterious indie rock band called The Velvet Sundown has shot to popularity on Spotify, and may be powered by AI. Their debut track, Dust on the Wind, has racked up over 380,000 plays since 20 June and helped attract more than 470,000 monthly listeners.

The song bears a resemblance to the 1977 Kansas hit Dust in the Wind, prompting suspicion from Reddit users. The band’s profile picture and Instagram photos appear AI-generated, while the band members listed — such as ‘Milo Rains’ and ‘Rio Del Mar’ — have no online trace.

Despite the clues, Spotify does not label the group as AI-generated. Their songs are appearing in curated playlists like Discover Weekly. Only Deezer, a French streaming service, has identified The Velvet Sundown as likely created by generative AI models like Suno or Udio.

Deezer began tagging AI music in June and now detects over 20,000 entirely artificial tracks each day. Another AI band, The Devil Inside, has also gained traction. Their song Bones in the River has over 1.6 million plays on Spotify, but lacks credited creators.

On Deezer, the same track is labelled as AI-generated and linked to Hungarian musician László Tamási — a rare human credit for bot-made music. While Deezer takes a transparent approach, Spotify, Apple Music, and Amazon Music have not announced detection systems or labelling plans.

Deezer CEO Alexis Lanternier said AI is ‘not inherently good or bad,’ but called for transparency to protect artist rights and user trust. Legal battles are already underway. US record labels have sued Suno and Udio for mass copyright infringement, though the companies argue it falls under fair use.

As AI-generated music continues to rise, platforms face increasing pressure to inform users and draw more precise lines between human and machine-made art.

Denmark proposes landmark law to protect citizens from deepfake misuse

Denmark’s Ministry of Culture has introduced a draft law aimed at safeguarding citizens’ images and voices under national copyright legislation, Azernews reports. The move marks a significant step in addressing the misuse of deepfake technologies.

The proposed bill prohibits using an individual’s likeness or voice without prior consent, enabling affected individuals to claim compensation. While satire and parody remain exempt, the legislation explicitly bans the unauthorised use of deepfakes in artistic performances.

Under the proposed framework, online platforms that fail to remove deepfake content upon request could be subject to fines. The legislation will apply only within Denmark and is expected to pass with up to 90% parliamentary support.

The bill follows recent incidents involving manipulated videos of Denmark’s Prime Minister and legal challenges against the creators of pornographic deepfakes.

If adopted, Denmark would become the first country in the region to implement such legal measures. The proposal is expected to spark broader discussions across Europe on the ethical boundaries of AI-generated content.

AI training with pirated books triggers massive legal risk

A US court has ruled that AI company Anthropic engaged in copyright infringement by downloading millions of pirated books to train its language model, Claude.

Although the court found that using copyrighted material for AI training could qualify as ‘fair use’ under US law when the content is transformed, it also held that acquiring the content illegally instead of licensing it lawfully constituted theft.

Judge William Alsup described AI as one of the most transformative technologies of our time. Still, he stated that Anthropic obtained millions of digital books from pirate sites such as LibGen and Pirate Library Mirror.

He noted that buying the same books later in print form does not erase the initial violation, though it may reduce potential damages.

The penalties for wilful copyright infringement in the US could reach up to $150,000 per work, meaning total compensation might run into the billions.
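
A rough worked example shows how quickly those totals escalate: the book count below is an assumption standing in for the ‘millions’ cited in the ruling, and the per-work figure is deliberately far below the statutory ceiling quoted above.

```python
# Illustrative damages arithmetic (assumed round numbers, not case findings).
works = 2_000_000     # stand-in for the "millions" of pirated books
per_work = 1_000      # well under the $150,000 ceiling for wilful infringement
print(f"${works * per_work:,}")  # $2,000,000,000
```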

The case highlights the fine line between transformation and theft and signals growing legal pressure on AI firms to respect intellectual property instead of bypassing established licensing frameworks.

Australia, which uses a ‘fair dealing’ system rather than ‘fair use’, already offers flexible licensing schemes through organisations like the Copyright Agency.

CEO Josephine Johnston urged policymakers not to weaken Australia’s legal framework in favour of global tech companies, arguing that licensing provides certainty for developers and fair payment to content creators.

Meta wins copyright case over AI training

Meta has won a copyright lawsuit brought by a group of authors who accused the company of using their books without permission to train its Llama generative AI.

A US federal judge in San Francisco ruled the AI training was ‘transformative’ enough to qualify as fair use under copyright law.

Judge Vince Chhabria noted, however, that future claims could be more successful. He warned that using copyrighted books to build tools capable of flooding the market with competing works may not always be protected by fair use, especially when such tools generate vast profits.

The case involved pirated copies of books, including Sarah Silverman’s memoir ‘The Bedwetter’ and Junot Diaz’s award-winning novel ‘The Brief Wondrous Life of Oscar Wao’. Meta defended its approach, stating that open-source AI drives innovation and relies on fair use as a key legal principle.

Chhabria clarified that the ruling does not confirm the legality of Meta’s actions, only that the plaintiffs made weak arguments. He suggested that more substantial evidence and legal framing might lead to a different outcome in future cases.
