The rise and risks of synthetic media

Synthetic media transforms content creation across sectors

The rapid development of AI has enabled significant breakthroughs in synthetic media, opening up new opportunities in healthcare, education, entertainment and many other sectors.

Instead of relying on traditional content creation, companies are now using advanced tools to produce immersive experiences, training simulations and personalised campaigns. But what exactly is synthetic media?

Synthetic media refers to content produced partly or entirely by AI, including AI-generated images, music, video and speech. Tools such as ChatGPT, Midjourney and voice synthesisers are now widely used in both creative and commercial settings.

The global market for synthetic media is expanding rapidly. Valued at USD 4.5 billion in 2023, it is projected to reach USD 16.6 billion by 2033, driven mainly by tools that convert text into images, videos or synthetic speech.

The appeal lies in its scalability and flexibility: small teams can now quickly produce a wide range of professional-grade content and easily adapt it for multiple audiences or languages.

However, as synthetic media becomes more widespread, so do the ethical challenges it poses.

How deepfakes threaten trust and security

The same technology has raised serious concerns as deepfakes – highly realistic but fake audio, images and videos – become harder to detect and more frequently misused.

Deepfakes, a subset of synthetic media, go a step further by creating content that intentionally imitates real people in deceptive ways, often for manipulation or fraud.

The technology behind deepfakes involves face swapping through variational autoencoders and voice cloning via synthesised speech patterns. The entry barrier is low, making these tools accessible to the general public.
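
For readers curious about the mechanics, the sketch below illustrates the shared-encoder, two-decoder autoencoder idea behind many face-swap tools: one encoder learns a common facial representation, and a separate decoder is trained per identity, so decoding person A's face with person B's decoder produces the swap. It is a simplified, non-variational toy with assumed layer sizes, not any specific tool's implementation.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder face-swap idea.
# Layer sizes and the 64x64 input are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a = Decoder()  # would be trained on faces of person A
decoder_b = Decoder()  # would be trained on faces of person B

face_a = torch.rand(1, 3, 64, 64)      # random stand-in for a 64x64 face crop
swapped = decoder_b(encoder(face_a))   # after training: A's expression, B's identity
print(swapped.shape)                   # torch.Size([1, 3, 64, 64])
```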

First surfacing on Reddit in 2017, deepfakes have quickly expanded into healthcare, entertainment, and education, yet they also pose a serious threat when misused. For example, a major financial scam recently cost a company USD 25 million after employees were deceived by a deepfaked video call impersonating the company's CFO.

Synthetic media fuels global political narratives

Politicians and supporters have often openly used generative AI to share satirical or exaggerated content, rather than attempting to disguise it as real.

In Indonesia, AI even brought back the likeness of former dictator Suharto to endorse candidates, while in India, meme culture thrived but failed to significantly influence voters’ decisions.

In the USA, figures like Elon Musk and Donald Trump have embraced AI-generated memes and voice parodies to mock opponents or improve their public image.

While these tools have made it easier to create misinformation, researchers such as UC Berkeley’s Hany Farid argue that the greater threat lies in the gradual erosion of trust, rather than a single viral deepfake.

It is becoming increasingly difficult for users to distinguish truth from fiction, leading to a contaminated information environment that harms public discourse. Legal concerns, public scrutiny, and the proliferation of ‘cheapfakes’—manipulated media that do not rely on AI—may have kept the worst predictions from materialising.

Nonetheless, experts warn that the use of AI in campaigns will continue to grow more sophisticated. Without clear regulation and ethical safeguards, future elections may struggle to contain the disruptive influence of synthetic media.

Children use AI to create harmful deepfakes

School-aged children are increasingly using AI tools to generate explicit deepfake images of their classmates, often targeting girls. What began as a novelty has become a new form of digital sexual abuse.

With just a smartphone and a popular app, teenagers can now create and share highly realistic fake nudes, turning moments of celebration, like a bat mitzvah photo, into weapons of humiliation.

Rather than being treated as simple pranks, these acts have severe psychological consequences for victims and are leaving lawmakers scrambling.

Educators and parents are now calling for urgent action. Instead of just warning teens about criminal consequences, schools are starting to teach digital ethics, consent, and responsible use of technology.

Programmes that explain the harm caused by deepfakes may offer a better path forward than punishment alone. Experts say the core issues—respect, agency, and safety—are not new.

The tools may be more advanced, but the message remains the same: technology must be used responsibly, not to exploit others.

Deepfakes become weapons of modern war

Deepfakes can also be deployed to sow confusion, falsify military orders, and manipulate public opinion. While not all such tactics will succeed, their growing use in psychological and propaganda operations cannot be ignored.

Intelligence agencies are already exploring how to integrate synthetic media into information warfare strategies, despite the risk of backfiring.

A new academic study from University College Cork examined how such videos spread on social media and how users reacted.

While many responded with scepticism and attempts at verification, others began accusing the real footage of being fake. The growing confusion risks creating an online environment where no information feels trustworthy, exactly the outcome hostile actors might seek.

While deception has long been part of warfare, deepfakes challenge the legal boundaries defined by international humanitarian law.

Falsifying surrender orders to launch ambushes could qualify as perfidy—a war crime—while misleading enemies about troop positions may remain lawful.

Yet when civilians are caught in the crossfire of digital lies, violations of the Geneva Conventions become harder to ignore.

Regulation is lagging behind the technology, and without urgent action, deepfakes may become as destructive as conventional weapons, redefining both warfare and the concept of truth.

The good side of deepfake technology

Yet, not all applications are harmful. In medicine, deepfakes can aid therapy or generate synthetic ECG data for research while protecting patient privacy. In education, the technology can recreate historical figures or deliver immersive experiences.

Journalists and human rights activists also use synthetic avatars for anonymity in repressive environments. Meanwhile, in entertainment, deepfakes offer cost-effective ways to recreate actors or build virtual sets.

These examples highlight how the same technology that fuels disinformation can also be harnessed for innovation and the public good.

Governments push for deepfake transparency

However, the risks are rising. Misinformation, fraud, nonconsensual content, and identity theft are all becoming more common.

The danger of copyright infringement and data privacy violations also looms large, particularly when AI-generated material pulls content from social media or copyrighted works without permission.

Policymakers are taking action, but is it enough?

The USA has banned AI robocalls, and Europe’s AI Act aims to regulate synthetic content. Experts emphasise the need for worldwide cooperation, with regulation focusing on consent, accountability, and transparency.

Embedding watermarks and enforcing civil liabilities are among the strategies being considered. To navigate the new landscape, a collaborative effort across governments, industry, and the public is crucial, not just to detect deepfakes but also to define their responsible use.

Some emerging detection methods include certifying content provenance, where creators or custodians attach verifiable information about the origin and authenticity of media.
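
As a rough illustration of the provenance idea, the sketch below signs a media file's hash with a creator's key so anyone can later verify that the bytes are unchanged. Real provenance standards such as C2PA attach far richer, structured manifests; the use of the `cryptography` package and the placeholder media bytes here are assumptions for illustration only.

```python
# Minimal provenance-style signing and verification sketch (illustrative only).
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

media_bytes = b"...raw bytes of an image or video file..."  # stand-in for real media
digest = hashlib.sha256(media_bytes).digest()

# Creator side: sign the media hash and publish the signature alongside the file.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(digest)

# Verifier side: recompute the hash of the received file and check the signature.
received_bytes = media_bytes  # swap in a tampered copy to see verification fail
try:
    public_key.verify(signature, hashlib.sha256(received_bytes).digest())
    print("Provenance check passed: the file matches what the creator signed.")
except InvalidSignature:
    print("Provenance check failed: the file was altered or the signature is invalid.")
```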

Automated detection systems analyse inconsistencies in facial movements, speech patterns, or visual blending to identify manipulated media. Additionally, platform moderation based on account reputation and behaviour helps filter suspicious sources.

Systems that process or store personal data must also comply with privacy regulations, ensuring individuals’ rights to correct or erase inaccurate data.

Yet, despite these efforts, many of these systems still struggle to reliably distinguish synthetic content from genuine content.

As detection methods lag, organisations such as Reality Defender and Witness are working to raise awareness and develop countermeasures.

The rise of AI influencers on social media

Another subset of synthetic media is AI-generated influencers. AI (or synthetic) influencers are virtual personas powered by AI, designed to interact with followers, create content, and promote brands across social media platforms.

Unlike traditional influencers, they are not real people but computer-generated characters that simulate human behaviour and emotional responses. Developers use deep learning, natural language processing, and sophisticated graphic design to make these influencers appear lifelike and relatable.

Once launched, they operate continuously, often in multiple languages and across different time zones, giving brands a global presence without the limitations of human engagement.

These virtual influencers offer several key advantages for brands. They can be precisely controlled to maintain consistent messaging and avoid the unpredictability that can come with human influencers.

Their scalability allows them to reach diverse markets with tailored content, and over time, they may prove more cost-efficient due to their ability to produce content at scale without the ongoing costs of human talent.

Brands can also experiment with creative storytelling in new and visually compelling ways that might be difficult for real-life creators.

Synthetic influencers have also begun appearing in the healthcare sector; although their adoption there remains limited, it is expected to grow rapidly.

Their rise also brings significant challenges. AI influencers lack genuine authenticity and emotional depth, which can hinder the formation of meaningful connections with audiences.

Their use raises ethical concerns around transparency, especially if followers are unaware that they are interacting with AI.

Data privacy is another concern, as these systems often rely on collecting and analysing large amounts of user information to function effectively.

Additionally, while they may save money in the long run, creating and maintaining a sophisticated AI influencer involves a substantial upfront investment.

Study warns of backlash from synthetic influencers

A new study from Northeastern University urges caution when using AI-powered influencers, despite their futuristic appeal and rising prominence.

While these digital figures may offer brands a modern edge, they risk inflicting greater harm on consumer trust than human influencers do when problems arise.

The findings show that consumers are more inclined to hold the brand accountable if a virtual influencer promotes a faulty product or spreads misleading information.

Rather than viewing these AI personas as independent agents, users tend to see them as direct reflections of the company behind them. Instead of blaming the influencer, audiences shift responsibility to the brand itself.

Interestingly, while human influencers are more likely to be held personally liable, virtual influencers still cause deeper reputational damage.

People assume that their actions are fully scripted and approved by the business, making any error seem deliberate or embedded in company practices rather than a personal mistake.

Despite these concerns, AI influencers are reshaping the marketing landscape by providing an innovative and highly adaptable tool for brands. While they are unlikely to replace human influencers entirely, they are expected to play a growing role in digital marketing.

Their continued rise will likely force regulators, brands, and developers to establish clearer ethical standards and guidelines to ensure responsible and transparent use.

Shaping the future of synthetic media

In conclusion, the growing presence of synthetic media invites both excitement and reflection. As researchers, policymakers, and creators grapple with its implications, the challenge lies not in halting progress but in shaping it thoughtfully.

All forms of synthetic media, like any other form of technology, have a dual capacity to empower and exploit, demanding a new digital literacy — one that prioritises critical engagement, ethical responsibility, and cross-sector collaboration.

On the one hand, deepfakes threaten democratic stability, information integrity, and civilian safety, blurring the line between truth and fabrication in conflict, politics, and public discourse.

On the other hand, AI influencers are transforming marketing and entertainment by offering scalable, controllable, and hyper-curated personas that challenge notions of authenticity and human connection.

Rather than fearing the tools themselves, we as human beings need to focus on cultivating the norms and safeguards that determine how, and for whom, they are used. Ultimately, these tools are meant to enhance our way of life, not undermine it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-made music sparks emotional authenticity debate

The Velvet Sundown, an AI music project, has produced retro‑style blues‑rock and indie pop tracks entirely without human musicians.

While the music echoes 1970s and 80s sounds, discussions have emerged over whether these digital creations truly capture emotional depth.

Many listeners find the melodies catchy and stylistically accurate, yet some critics argue AI‑generated music lacks the spontaneity, lived experience and nuance that human artists bring.

Skeptics contend that an AI’s technical precision cannot replicate the intangible aspects of musical performance.

The rise of AI in music has opened creative possibilities but also triggered deeper cultural questions. Can technology ever replace genuine emotion, or will it remain a stylistic tool?

Experts agree that human creativity remains vital: AI can enhance, but not fully substitute for, the soul‑stirring impulse of authentic music-making.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI model detects infections from wound photos

Mayo Clinic researchers have developed an AI system capable of detecting surgical site infections from wound photographs submitted by patients. The model was trained on more than 20,000 images from over 6,000 patients across nine hospital locations.

The AI pipeline identifies whether a photo contains a surgical incision and then evaluates that incision for infection. Built on a Vision Transformer architecture, the model accurately recognises incisions and achieves a high AUC for infection detection.
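
As a rough sketch of such a two-stage pipeline, the code below uses torchvision's ViT-B/16 backbone as a stand-in: one classifier decides whether a photo shows a surgical incision, and a second scores that incision for infection. The backbone choice, threshold, and class layout are illustrative assumptions, not the Mayo Clinic's published implementation.

```python
# Two-stage wound-screening sketch: incision detection, then infection scoring.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

def make_classifier(num_classes: int) -> nn.Module:
    model = vit_b_16(weights=None)  # a real system would start from pretrained weights
    model.heads.head = nn.Linear(model.heads.head.in_features, num_classes)
    return model

incision_detector = make_classifier(2)  # incision present vs. not
infection_scorer = make_classifier(2)   # infected vs. not infected

photo = torch.rand(1, 3, 224, 224)      # stand-in for a patient-submitted wound photo

with torch.no_grad():
    if incision_detector(photo).softmax(-1)[0, 1] > 0.5:   # assumed decision threshold
        p_infected = infection_scorer(photo).softmax(-1)[0, 1].item()
        print(f"Estimated infection probability: {p_infected:.2f}")
    else:
        print("No surgical incision detected in the photo.")
```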

Medical staff review outpatient wound images manually, which can delay care and burden resources. Automating this process may improve early diagnosis, reduce unnecessary visits, and speed up responses to high-risk cases.

Researchers believe the tool could eventually serve as a frontline screening method, especially helpful in rural or understaffed areas. Consistent performance across diverse patient groups also suggests a lower risk of algorithmic bias, though further validation remains essential.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI locks down operations after DeepSeek model concerns

OpenAI has significantly tightened its internal security following reports that DeepSeek may have replicated its models. DeepSeek allegedly used distillation techniques to launch a competing product earlier this year, prompting a swift response.

OpenAI has introduced strict access protocols to prevent information leaks, including fingerprint scans, offline servers, and a policy restricting internet use without approval. Sensitive projects such as its o1 model are now discussed only by approved staff in designated areas.

The company has also boosted cybersecurity staffing and reinforced its data centre defences. Confidential development information is now shielded through ‘information tenting’.

These actions coincide with OpenAI’s $30 billion deal with Oracle to lease 4.5 gigawatts of data centre capacity across the United States. The partnership plays a central role in OpenAI’s growing Stargate infrastructure strategy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Phishing 2.0: How AI is making cyber scams more convincing

Phishing remains among the most widespread and dangerous cyber threats, especially for individuals and small businesses. These attacks rely on deception—emails, texts, or social messages that impersonate trusted sources to trick people into giving up sensitive information.

Cybercriminals exploit urgency and fear. A typical example is a fake email from a bank saying your account is at risk, prompting you to click a malicious link. Even when emails look legitimate, subtle details—like a strange sender address—can be red flags.

In one recent scam, Netflix users received fake alerts about payment failures. The link led to a fake login page where credentials and payment data were stolen. Similar tactics have been used against QuickBooks users, small businesses, and Microsoft 365 customers.

Small businesses are frequent targets due to limited security resources. Emails mimicking vendors or tech companies often trick employees into handing over credentials, giving attackers access to sensitive systems.

Phishing works because it preys on human psychology: trust, fear, and urgency. And with AI, attackers can now generate more convincing content, making detection harder than ever.

Protection starts with vigilance. Always check sender addresses, avoid clicking suspicious links, and enable multi-factor authentication (MFA). Employee training, secure protocols for sensitive requests, and phishing simulations are critical for businesses.
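
As a small illustration of the 'check the sender address' advice, the sketch below flags messages whose From: domain is not on an assumed allow-list for the organisation. Real mail security also relies on SPF, DKIM, and DMARC checks; the domains here are invented for the example.

```python
# Tiny heuristic: flag a message whose sender domain is not on the expected list.
from email.utils import parseaddr

EXPECTED_DOMAINS = {"netflix.com", "mailer.netflix.com"}  # assumed allow-list

def looks_suspicious(from_header: str) -> bool:
    _, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return domain not in EXPECTED_DOMAINS

print(looks_suspicious("Netflix <billing@netflix.com>"))          # False
print(looks_suspicious("Netflix Support <help@netfIix-pay.co>"))  # True
```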

Phishing attacks will continue to grow in sophistication, but with awareness and layered security practices, users and businesses can stay ahead of the threat.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Groq opens AI data centre in Helsinki

Groq has opened its first European AI data centre in Helsinki, Finland, in collaboration with Equinix. The facility offers European users fast, secure, and low-latency AI inference services, aiming to improve performance and data governance.

The launch follows Groq’s existing partnership with Equinix, which already includes a site in Dallas. The new centre complements Groq’s global network, including facilities in the US, Canada and Saudi Arabia.

CEO Jonathan Ross stated the centre provides immediate infrastructure for developers building fast at scale. Equinix highlighted Finland’s reliable power and sustainable energy as key factors in the decision to host capacity there.

The data centre supports GroqCloud, delivering over 20 million tokens per second across its network. European businesses are expected to benefit from improved AI performance and operational efficiency.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Training AI sustainably depends on where and how

Organisations are urged to place AI model training in regions powered by renewable energy. For instance, training in Canada rather than Poland could reduce carbon emissions by approximately 85%. Strategic location choices are becoming vital for greener AI.
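
A back-of-the-envelope calculation shows why location matters: emissions scale roughly with the grid's carbon intensity, so the same training run emits far less on a cleaner grid. The energy figure and intensity values below are rough illustrative estimates, not measured data.

```python
# Rough location-effect estimate: emissions = energy used * grid carbon intensity.
TRAINING_ENERGY_KWH = 100_000  # assumed energy for one training run
GRID_INTENSITY = {             # approximate gCO2 per kWh, for illustration only
    "Poland": 750,
    "Canada": 120,
}

emissions_tonnes = {
    region: TRAINING_ENERGY_KWH * g_per_kwh / 1_000_000
    for region, g_per_kwh in GRID_INTENSITY.items()
}
reduction = 1 - emissions_tonnes["Canada"] / emissions_tonnes["Poland"]
print(emissions_tonnes)                        # {'Poland': 75.0, 'Canada': 12.0}
print(f"Relative reduction: {reduction:.0%}")  # roughly 84%, in line with the ~85% figure
```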

Selecting appropriate hardware also plays a pivotal role. Research shows that a high-end NVIDIA H100 GPU carries three times the manufacturing carbon footprint of a more energy-efficient NVIDIA L4. Opting for the proper GPU can deliver performance without undue environmental cost.

Efficiency should be embedded at every stage of the AI process. From hardware procurement and algorithm design to operational deployment, even fractional improvements across the supply chain can significantly reduce overall carbon output, ensuring that today’s progress doesn’t harm tomorrow’s planet.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US Cyber Command proposes $5M AI Initiative for 2026 budget

US Cyber Command is seeking $5 million in its fiscal year 2026 budget to launch a new AI project to advance data integration and operational capabilities.

While the amount represents a small fraction of the command’s $1.3 billion research and development (R&D) portfolio, the effort reflects growing emphasis on incorporating AI into cyber operations.

The initiative follows congressional direction set in the fiscal year (FY) 2023 National Defense Authorization Act, which tasked Cyber Command and the Department of Defense’s Chief Information Officer—working with the Chief Digital and Artificial Intelligence Officer, DARPA, the NSA, and the Undersecretary of Defense for Research and Engineering—to produce a five-year guide and implementation plan for rapid AI adoption.

That roadmap, developed shortly afterwards, identified priorities for deploying AI systems, applications, and supporting data processes across cyber forces.

Cyber Command formed an AI task force within its Cyber National Mission Force (CNMF) to operationalise these priorities. The newly proposed funding would support the task force’s efforts to establish core data standards, curate and tag operational data, and accelerate the integration of AI and machine learning solutions.

Known as Artificial Intelligence for Cyberspace Operations, the project will focus on piloting AI technologies using an agile 90-day cycle. This approach is designed to rapidly assess potential solutions against real-world use cases, enabling quick iteration in response to evolving cyber threats.

Budget documents indicate the CNMF plans to explore how AI can enhance threat detection, automate data analysis, and support decision-making processes. The command’s Cyber Immersion Laboratory will be essential in testing and evaluating these cyber capabilities, with external organisations conducting independent operational assessments.

The AI roadmap identifies five categories for applying AI across Cyber Command’s enterprise: vulnerabilities and exploits; network security, monitoring, and visualisation; modelling and predictive analytics; persona and identity management; and infrastructure and transport systems.

To fund this effort, Cyber Command plans to shift resources from its operations and maintenance account into its R&D budget as part of the transition from FY2025 to FY2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Pakistan launches AI customs system to tackle tax evasion

Pakistan has launched its first AI-powered Customs Clearance and Risk Management System (RMS) to cut tax evasion, reduce corruption, and modernise port operations by automating inspections and declarations.

The initiative, part of broader digital reforms, is led by the Federal Board of Revenue (FBR) with support from the Intelligence Bureau.

By minimising human involvement in customs procedures, the system enables faster, fairer, and more transparent processing. It uses AI and automated bots to assess goods’ value and classification, improve risk profiling, and streamline green channel clearances.
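
Purely as an illustration of risk-based routing, the sketch below scores a declaration on undervaluation and importer history and sends low-risk cargo to the green channel. The features, weights, and threshold are invented for illustration and do not describe the FBR's actual system.

```python
# Hypothetical risk-scoring sketch for customs channel routing (illustrative only).
from dataclasses import dataclass

@dataclass
class Declaration:
    declared_value: float      # value declared by the importer
    reference_value: float     # system's reference valuation for similar goods
    importer_violations: int   # prior compliance violations on record

def risk_score(d: Declaration) -> float:
    undervaluation = max(0.0, 1 - d.declared_value / d.reference_value)
    history_penalty = min(1.0, d.importer_violations / 5)
    return 0.7 * undervaluation + 0.3 * history_penalty   # assumed weights

def route(d: Declaration) -> str:
    return "green channel (fast clearance)" if risk_score(d) < 0.2 else "red channel (inspection)"

print(route(Declaration(declared_value=9_500, reference_value=10_000, importer_violations=0)))
print(route(Declaration(declared_value=4_000, reference_value=10_000, importer_violations=3)))
```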

Early trials showed a 92% boost in system performance and more than doubled the efficiency of identifying compliant cargo.

Prime Minister Shehbaz Sharif praised the collaboration between the FBR and IB, calling the initiative a key pillar of national economic reform. He urged full integration of the system into the country’s digital infrastructure and reaffirmed tax reform as a government priority.

The AI system is also expected to close loopholes in under-invoicing and misdeclaration, which have long been used to avoid duties.

Meanwhile, video analytics technology is being trialled to detect factory tax fraud, with early tests showing 98% accuracy. In recent enforcement efforts, authorities recovered Rs178 billion, highlighting the potential of data-driven approaches in tackling fiscal losses.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Wimbledon faces backlash over AI line judges after tech errors spark outrage

Wimbledon’s decision to fully replace human line judges with an AI-powered system has sparked growing discontent among players and fans.

Although designed for precision, the Hawk-Eye Live system has made questionable calls, been difficult to hear during matches, and even shut down unexpectedly, raising concerns about its reliability.

British players Jack Draper and Emma Raducanu both expressed frustration over key points lost due to what they believed were inaccurate calls. Sonay Kartal’s match was interrupted in a particularly disruptive incident when the AI system crashed mid-game, prompting organisers to apologise.

The All England Club defends the system as more impartial than human officials, but not everyone agrees. Over 300 line judges lost their jobs, and some staged protests outside the grounds.

With no way to challenge calls made by the machine, players say the system removes accountability and human judgement from the sport.

While Wimbledon continues to market the move as progress, critics argue that the tournament has sacrificed tradition and clarity for automation.

As other Grand Slams like the French Open retain human officials, questions remain over whether AI is improving the sport or changing it for the worse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!