AI-powered imposter poses as US Secretary of State Rubio

An imposter posing as US Secretary of State Marco Rubio used an AI-generated voice and text messages to contact high-ranking officials, including foreign ministers, a senator, and a state governor.

The messages, sent through SMS and the encrypted app Signal, triggered an internal warning across the US State Department, according to a classified cable dated 3 July.

The individual created a fake Signal account using the name ‘Marco.Rubio@state.gov’ and began contacting targets in mid-June.

At least two received AI-generated voicemails, while others were encouraged to continue the chat via Signal. US officials said the aim was likely to gain access to sensitive information or compromise official accounts.

The State Department confirmed it is investigating the incident and has urged all embassies and consulates to remain alert. While no direct cyber threat to the department was found, it warned that information shared with the imposter could still be exposed if targets were deceived.

A spokesperson declined to provide further details for security reasons.

The incident appears linked to a broader wave of AI-driven disinformation. A second operation, possibly tied to Russian actors, reportedly targeted Gmail accounts of journalists and former officials.

The FBI has warned of rising cases of ‘smishing’ and ‘vishing’ involving AI-generated content.

Experts now warn that deepfakes are becoming harder to detect, as the technology advances faster than defences.

The rise and risks of synthetic media

Synthetic media transforms content creation across sectors

The rapid development of AI has enabled significant breakthroughs in synthetic media, opening up new opportunities in healthcare, education, entertainment and many other sectors.

Instead of relying on traditional content creation, companies are now using advanced tools to produce immersive experiences, training simulations and personalised campaigns. But what exactly is synthetic media?

Synthetic media refers to content produced partly or entirely by AI, including AI-generated images, music, video and speech. Tools such as ChatGPT, Midjourney and voice synthesisers are now widely used in both creative and commercial settings.

The global market for synthetic media is expanding rapidly. Valued at USD 4.5 billion in 2023, it is projected to reach USD 16.6 billion by 2033, driven mainly by tools that convert text into images, videos or synthetic speech.
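For context, those two figures imply a compound annual growth rate of roughly 14%. A minimal back-of-the-envelope check, treating the cited USD 4.5 billion and USD 16.6 billion values as fixed endpoints a decade apart, is sketched below.

```python
# Implied compound annual growth rate (CAGR) between the two market
# estimates cited above (2023 and 2033, i.e. ten years apart).
start_value = 4.5   # USD billion, 2023 valuation
end_value = 16.6    # USD billion, 2033 projection
years = 10

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 13.9% per year
```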

The appeal lies in its scalability and flexibility: small teams can now quickly produce a wide range of professional-grade content and easily adapt it for multiple audiences or languages.

However, as synthetic media becomes more widespread, so do the ethical challenges it poses.

How deepfakes threaten trust and security

The same technology has raised serious concerns as deepfakes – highly realistic but fake audio, images and videos – become harder to detect and more frequently misused.

Deepfakes, a subset of synthetic media, go a step further by creating content that intentionally imitates real people in deceptive ways, often for manipulation or fraud.

The technology behind deepfakes involves face swapping through variational autoencoders and voice cloning via synthesised speech patterns. The entry barrier is low, making these tools accessible to the general public.
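As a rough illustration of that face-swapping idea, the sketch below shows the classic shared-encoder, per-identity-decoder arrangement in PyTorch. The layer sizes, image resolution and dummy data are assumptions chosen for brevity, not a description of any particular deepfake tool; a real system would add a training loop, face alignment and far larger networks.

```python
# Illustrative sketch of the classic face-swap setup: one shared encoder,
# one decoder per identity. Each decoder is trained to reconstruct its own
# person; at inference time, a face of person A is encoded and then decoded
# with person B's decoder. All sizes and data here are placeholders.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                          # shared latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()    # one decoder per identity

faces_a = torch.rand(8, 3, 64, 64)             # stand-in for aligned face crops of person A
reconstruction = decoder_a(encoder(faces_a))   # training target: reconstruct person A

# The "swap": encode person A's face, decode with person B's decoder.
swapped = decoder_b(encoder(faces_a))
print(swapped.shape)                           # torch.Size([8, 3, 64, 64])
```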

First surfacing on Reddit in 2017, deepfakes have quickly expanded into healthcare, entertainment, and education, yet they also pose a serious threat when misused. For example, a major financial scam recently cost a company USD 25 million due to a deepfaked video call with a fake CFO.

Synthetic media fuels global political narratives

Politicians and supporters have often openly used generative AI to share satirical or exaggerated content, rather than attempting to disguise it as real.

In Indonesia, AI even brought back the likeness of former dictator Suharto to endorse candidates, while in India, meme culture thrived but failed to significantly influence voters’ decisions.

In the USA, figures like Elon Musk and Donald Trump have embraced AI-generated memes and voice parodies to mock opponents or improve their public image.

While these tools have made it easier to create misinformation, researchers such as UC Berkeley’s Hany Farid argue that the greater threat lies in the gradual erosion of trust, rather than a single viral deepfake.

It is becoming increasingly difficult for users to distinguish truth from fiction, leading to a contaminated information environment that harms public discourse. Legal concerns, public scrutiny, and the proliferation of ‘cheapfakes’ (manipulated media that do not rely on AI) may explain why the worst predictions about AI-driven election interference have not yet materialised.

Nonetheless, experts warn that the use of AI in campaigns will continue to grow more sophisticated. Without clear regulation and ethical safeguards, future elections may struggle to contain the disruptive influence of synthetic media.

Children use AI to create harmful deepfakes

School-aged children are increasingly using AI tools to generate explicit deepfake images of their classmates, often targeting girls. What began as a novelty has become a new form of digital sexual abuse.

With just a smartphone and a popular app, teenagers can now create and share highly realistic fake nudes, turning moments of celebration, like a bat mitzvah photo, into weapons of humiliation.

Far from being simple pranks, these acts have severe psychological consequences for victims and have left lawmakers scrambling to respond.

Educators and parents are now calling for urgent action. Instead of just warning teens about criminal consequences, schools are starting to teach digital ethics, consent, and responsible use of technology.

Programmes that explain the harm caused by deepfakes may offer a better path forward than punishment alone. Experts say the core issues—respect, agency, and safety—are not new.

The tools may be more advanced, but the message remains the same: technology must be used responsibly, not to exploit others.

Deepfakes become weapons of modern war

Deepfakes can also be deployed to sow confusion, falsify military orders, and manipulate public opinion. While not all such tactics will succeed, their growing use in psychological and propaganda operations cannot be ignored.

Intelligence agencies are already exploring how to integrate synthetic media into information warfare strategies, despite the risk of backfiring.

A new academic study from University College Cork examined how such videos spread on social media and how users reacted.

While many responded with scepticism and attempts at verification, others dismissed genuine footage as fake. The growing confusion risks creating an online environment where no information feels trustworthy, exactly the outcome hostile actors might seek.

While deception has long been part of warfare, deepfakes challenge the legal boundaries defined by international humanitarian law.

Falsifying surrender orders to launch ambushes could qualify as perfidy—a war crime—while misleading enemies about troop positions may remain lawful.

Yet when civilians are caught in the crossfire of digital lies, violations of the Geneva Conventions become harder to ignore.

Regulation is lagging behind the technology, and without urgent action, deepfakes may become as destructive as conventional weapons, redefining both warfare and the concept of truth.

The good side of deepfake technology

Yet, not all applications are harmful. In medicine, deepfakes can aid therapy or generate synthetic ECG data for research while protecting patient privacy. In education, the technology can recreate historical figures or deliver immersive experiences.

Journalists and human rights activists also use synthetic avatars for anonymity in repressive environments. Meanwhile, in entertainment, deepfakes offer cost-effective ways to recreate actors or build virtual sets.

These examples highlight how the same technology that fuels disinformation can also be harnessed for innovation and the public good.

Governments push for deepfake transparency

However, the risks are rising. Misinformation, fraud, nonconsensual content, and identity theft are all becoming more common.

The danger of copyright infringement and data privacy violations also looms large, particularly when AI-generated material pulls content from social media or copyrighted works without permission.

Policymakers are taking action, but is it enough?

The USA has banned AI robocalls, and Europe’s AI Act aims to regulate synthetic content. Experts emphasise the need for worldwide cooperation, with regulation focusing on consent, accountability, and transparency.

Embedding watermarks and enforcing civil liabilities are among the strategies being considered. To navigate the new landscape, a collaborative effort across governments, industry, and the public is crucial, not just to detect deepfakes but also to define their responsible use.

Some emerging detection methods include certifying content provenance, where creators or custodians attach verifiable information about the origin and authenticity of media.
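As a rough, simplified illustration of that provenance idea (not an implementation of any specific standard such as C2PA), the sketch below hashes a media file and signs the digest so that anyone holding the matching public key can later check that the file has not been altered since it was certified. The file name is hypothetical and key management is deliberately glossed over.

```python
# Minimal provenance sketch: hash a media file, sign the digest at publication,
# and verify it later. Inspired by, but not implementing, standards like C2PA.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def file_digest(path: str) -> bytes:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).digest()

# Creator side: sign the digest when the media is published.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

digest = file_digest("original_clip.mp4")   # hypothetical file name
signature = private_key.sign(digest)        # distributed alongside the media

# Verifier side: recompute the digest and check the signature.
def is_unmodified(path: str, signature: bytes) -> bool:
    try:
        public_key.verify(signature, file_digest(path))
        return True
    except InvalidSignature:
        return False

print(is_unmodified("original_clip.mp4", signature))   # True while the file is untouched
```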

Automated detection systems analyse inconsistencies in facial movements, speech patterns, or visual blending to identify manipulated media. Additionally, platform moderation based on account reputation and behaviour helps filter suspicious sources.
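One way to picture the automated-detection side is as a classifier over hand-crafted consistency features (blink rate, lip-sync error, blending artefact scores and so on). The sketch below uses random stand-in features and scikit-learn's logistic regression purely to show the shape of such a pipeline; it is not a working detector, and real systems extract these features with dedicated face-tracking and audio models.

```python
# Toy shape of a feature-based deepfake detector: each video is reduced to a
# small vector of consistency features and a classifier separates real from
# synthetic. The features and labels below are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_videos, n_features = 200, 3           # e.g. [blink_rate, lip_sync_error, blend_score]
X = rng.normal(size=(n_videos, n_features))
y = rng.integers(0, 2, size=n_videos)   # 0 = real, 1 = synthetic (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))  # ~0.5 on random data, as expected
```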

Systems that process or store personal data must also comply with privacy regulations, ensuring individuals’ rights to correct or erase inaccurate data.

Yet, despite these efforts, many of these systems still struggle to reliably distinguish synthetic content from genuine material.

As detection methods lag, organisations such as Reality Defender and Witness are working to raise awareness and develop countermeasures.

The rise of AI influencers on social media

Another subset of synthetic media is AI-generated influencers. AI (or synthetic) influencers are virtual personas powered by AI, designed to interact with followers, create content, and promote brands across social media platforms.

Unlike traditional influencers, they are not real people but computer-generated characters that simulate human behaviour and emotional responses. Developers use deep learning, natural language processing, and sophisticated graphic design to make these influencers appear lifelike and relatable.
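A minimal sketch of how such a persona layer might sit on top of a text-generation model is shown below; the persona text, the `generate_text` helper and the disclosure rule are all hypothetical placeholders rather than how any real virtual influencer is built.

```python
# Hypothetical AI-influencer reply pipeline: a fixed persona prompt is combined
# with each follower comment and handed to a language model. generate_text()
# is a placeholder for whatever model the developer actually uses.
PERSONA = (
    "You are 'Nova', a virtual fashion influencer. Reply in a friendly, upbeat "
    "tone, stay on brand, and always disclose that you are AI-generated."
)

def generate_text(prompt: str) -> str:
    # Placeholder: swap in a call to a real text-generation model here.
    return "Thanks so much for the love! (AI-generated reply)"

def reply_to_comment(comment: str) -> str:
    prompt = f"{PERSONA}\n\nFollower comment: {comment}\nReply:"
    return generate_text(prompt)

print(reply_to_comment("Love the new look! Where is that jacket from?"))
```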

Once launched, they operate continuously, often in multiple languages and across different time zones, giving brands a global presence without the limitations of human engagement.

These virtual influencers offer several key advantages for brands. They can be precisely controlled to maintain consistent messaging and avoid the unpredictability that can come with human influencers.

Their scalability allows them to reach diverse markets with tailored content, and over time, they may prove more cost-efficient due to their ability to produce content at scale without the ongoing costs of human talent.

Brands can also experiment with creative storytelling in new and visually compelling ways that might be difficult for real-life creators.

Synthetic influencers have also begun appearing in the healthcare sector; although their adoption there remains limited, it is expected to grow rapidly.

Their rise also brings significant challenges. AI influencers lack genuine authenticity and emotional depth, which can hinder the formation of meaningful connections with audiences.

Their use raises ethical concerns around transparency, especially if followers are unaware that they are interacting with AI.

Data privacy is another concern, as these systems often rely on collecting and analysing large amounts of user information to function effectively.

Additionally, while they may save money in the long run, creating and maintaining a sophisticated AI influencer involves a substantial upfront investment.

Study warns of backlash from synthetic influencers

A new study from Northeastern University urges caution when using AI-powered influencers, despite their futuristic appeal and rising prominence.

While these digital figures may offer brands a modern edge, they risk inflicting greater harm on consumer trust compared to human influencers when problems arise.

The findings show that consumers are more inclined to hold the brand accountable if a virtual influencer promotes a faulty product or spreads misleading information.

Rather than viewing these AI personas as independent agents, users tend to see them as direct reflections of the company behind them. Instead of blaming the influencer, audiences shift responsibility to the brand itself.

Interestingly, while human influencers are more likely to be held personally liable, virtual influencers still cause deeper reputational damage to the brands behind them.

People assume that their actions are fully scripted and approved by the business, making any error seem deliberate or embedded in company practices rather than a personal mistake.

Regardless of the circumstances, AI influencers are reshaping the marketing landscape by providing an innovative and highly adaptable tool for brands. While they are unlikely to replace human influencers entirely, they are expected to play a growing role in digital marketing.

Their continued rise will likely force regulators, brands, and developers to establish clearer ethical standards and guidelines to ensure responsible and transparent use.

Shaping the future of synthetic media

In conclusion, the growing presence of synthetic media invites both excitement and reflection. As researchers, policymakers, and creators grapple with its implications, the challenge lies not in halting progress but in shaping it thoughtfully.

All forms of synthetic media, like any other form of technology, have a dual capacity to empower and exploit, demanding a new digital literacy — one that prioritises critical engagement, ethical responsibility, and cross-sector collaboration.

On the one hand, deepfakes threaten democratic stability, information integrity, and civilian safety, blurring the line between truth and fabrication in conflict, politics, and public discourse.

On the other hand, AI influencers are transforming marketing and entertainment by offering scalable, controllable, and hyper-curated personas that challenge notions of authenticity and human connection.

Rather than fearing the tools themselves, we need to focus on cultivating the norms and safeguards that determine how, and for whom, they are used. Ultimately, these tools are meant to enhance our way of life, not undermine it.

Musk’s chatbot Grok removes offensive content

Elon Musk’s AI chatbot Grok has removed several controversial posts after they were flagged as anti-Semitic and accused of praising Adolf Hitler.

The deletions followed backlash from users on X and criticism from the Anti-Defamation League (ADL), which condemned the language as dangerous and extremist.

Grok, developed by Musk’s xAI company, sparked outrage after stating Hitler would be well-suited to tackle anti-White hatred and claiming he would ‘handle it decisively’. The chatbot also made troubling comments about Jewish surnames and referred to Hitler as ‘history’s moustache man’.

In response, xAI acknowledged the issue and said it had begun filtering out hate speech before posts go live. The company credited user feedback for helping identify weaknesses in Grok’s training data and pledged ongoing updates to improve the model’s accuracy.

The ADL criticised the chatbot’s behaviour as ‘irresponsible’ and warned that such AI-generated rhetoric fuels rising anti-Semitism online.

It is not the first time Grok has been caught in controversy — earlier this year, the bot repeated White genocide conspiracy theories, which xAI blamed on an unauthorised software change.

ChatGPT quietly tests new ‘Study Together’ feature for education

A few ChatGPT users have noticed a new option called ‘Study Together’ appearing among available tools, though OpenAI has yet to confirm any official rollout. The feature appears designed to make ChatGPT a more interactive educational companion rather than a tool that simply delivers instant answers.

Rather than offering direct solutions, the tool prompts users to think for themselves by asking questions, potentially turning ChatGPT into a digital tutor.
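The guided-questioning behaviour described here can already be approximated with an ordinary system prompt. The sketch below shows one hypothetical framing using OpenAI's chat completions client; the model name is an assumption, and the real ‘Study Together’ feature may work quite differently.

```python
# Hypothetical approximation of a 'study together' tutoring mode: the model is
# instructed to ask guiding questions rather than hand over answers.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SOCRATIC_TUTOR = (
    "You are a study partner. Never give the final answer outright. "
    "Ask one guiding question at a time, check the student's reasoning, "
    "and only confirm a solution once the student has worked it out."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name, for illustration only
    messages=[
        {"role": "system", "content": SOCRATIC_TUTOR},
        {"role": "user", "content": "Why does dividing by zero break arithmetic?"},
    ],
)
print(response.choices[0].message.content)
```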

Some speculate the mode might eventually allow multiple users to study together in real-time, mimicking a virtual study group environment.

With the chatbot already playing a significant role in classrooms — helping teachers plan lessons or assisting students with homework — the ‘Study Together’ feature might help guide users toward deeper learning instead of enabling shortcuts.

Critics have warned that AI tools like ChatGPT risk undermining education, so the feature could mark a strategic shift towards encouraging more constructive academic use.

OpenAI has not confirmed when or if the feature will launch publicly, or whether it will be limited to ChatGPT Plus users. When asked, ChatGPT only replied that nothing had been officially announced.

Phishing 2.0: How AI is making cyber scams more convincing

Phishing remains among the most widespread and dangerous cyber threats, especially for individuals and small businesses. These attacks rely on deception—emails, texts, or social messages that impersonate trusted sources to trick people into giving up sensitive information.

Cybercriminals exploit urgency and fear. A typical example is a fake email from a bank saying your account is at risk, prompting you to click a malicious link. Even when emails look legitimate, subtle details—like a strange sender address—can be red flags.

In one recent scam, Netflix users received fake alerts about payment failures. The link led to a fake login page where credentials and payment data were stolen. Similar tactics have been used against QuickBooks users, small businesses, and Microsoft 365 customers.

Small businesses are frequent targets due to limited security resources. Emails mimicking vendors or tech companies often trick employees into handing over credentials, giving attackers access to sensitive systems.

Phishing works because it preys on human psychology: trust, fear, and urgency. And with AI, attackers can now generate more convincing content, making detection harder than ever.

Protection starts with vigilance. Always check sender addresses, avoid clicking suspicious links, and enable multi-factor authentication (MFA). Employee training, secure protocols for sensitive requests, and phishing simulations are critical for businesses.
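As a small illustration of the ‘check the sender address’ advice, the sketch below flags messages whose From: domain is not on an allow-list of domains the recipient actually expects, which also catches common lookalike tricks. The domains and addresses are made-up examples, and real mail filters combine many more signals (SPF, DKIM, DMARC, link analysis).

```python
# Tiny sender-address heuristic: flag emails whose From: domain is not on an
# allow-list of expected domains. Example values only.
from email.utils import parseaddr

EXPECTED_DOMAINS = {"netflix.com", "intuit.com", "microsoft.com"}

def is_suspicious_sender(from_header: str) -> bool:
    _, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return domain not in EXPECTED_DOMAINS

print(is_suspicious_sender("Netflix Billing <support@netfIix-payments.com>"))  # True (lookalike)
print(is_suspicious_sender("Netflix Info <info@netflix.com>"))                 # False
```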

Phishing attacks will continue to grow in sophistication, but with awareness and layered security practices, users and businesses can stay ahead of the threat.

UNESCO pushes for digital trust at WSIS+20

At the WSIS+20 High-Level Event in Geneva, UNESCO convened a timely session exploring how to strengthen global information ecosystems through responsible platform governance and smart technology use. The discussion, titled ‘Towards a Resilient Information Ecosystem’, brought together international regulators, academics, civil society leaders, and tech industry representatives to assess digital media’s role in shaping public discourse, especially in times of crisis.

UNESCO’s Assistant Director General Tawfik Jelassi emphasised the organisation’s longstanding mission to build peace through knowledge sharing, warning that digital platforms now risk becoming breeding grounds for misinformation, hate speech, and division. To counter this, he highlighted UNESCO’s ‘Internet for Trust’ initiative, which produced governance guidelines informed by over 10,000 global contributions.

Speakers called for a shift from viewing misinformation as an isolated problem to understanding the broader digital communication ecosystem, especially during crises such as wars or natural disasters. Professor Ingrid Volkmer stressed that global monopolies like Starlink, Amazon Web Services, and OpenAI dominate critical communication infrastructure, often without sufficient oversight.

She urged a paradigm shift that treats crisis communication as an interconnected system requiring tailored regulation and risk assessments. France’s digital regulator Frédéric Bokobza outlined the European Digital Services Act’s role in enhancing transparency and accountability, noting the importance of establishing direct cooperation with platforms, particularly during elections.

The panel also spotlighted ways to empower users. Google’s Nadja Blagojevic showcased initiatives like SynthID watermarking for AI-generated content and media literacy programs such as ‘Be Internet Awesome,’ which aim to build digital critical thinking skills across age groups.

Meanwhile, Maria Paz Canales from Global Partners Digital offered a civil society perspective, sharing how AI tools protect protestors’ identities, preserve historical memory, and amplify marginalised voices, even amid funding challenges. She also called for regulatory models distinguishing between traditional commercial media and true public interest journalism, particularly in underrepresented regions like Latin America.

The session concluded with a strong call for international collaboration among regulators and platforms, affirming that information should be treated as a public good. Participants underscored the need for inclusive, multistakeholder governance and sustainable support for independent media to protect democratic values in an increasingly digital world.

Preserving languages in a digital world: A call for inclusive action

At the WSIS+20 High-Level Event in Geneva, UNESCO convened a powerful session on the critical need to protect multilingualism in the digital age. With over 8,000 languages spoken globally but fewer than 120 represented online, the panel warned of a growing digital divide that excludes billions and marginalises thousands of cultures.

Dr Tawfik Jelassi of UNESCO painted a vivid metaphor of the internet as a vast library where most languages have no books on the shelves, calling for urgent action to safeguard humanity’s linguistic and cultural diversity.

Speakers underscored that bridging this divide goes beyond creating language tools—it requires systemic change rooted in policy, education, and community empowerment. Guilherme Canela of UNESCO highlighted ongoing initiatives like the 2003 Recommendation on Multilingualism and the UN Decade of Indigenous Languages, which has already inspired 15 national action plans.

Panellists like Valts Ernstreits and Sofiya Zahova emphasised community-led efforts, citing examples from Latvia, Iceland, and Sámi institutions that show how native speakers and local institutions must lead digital inclusion efforts.

Africa’s case brought the urgency into sharp focus. David Waweru noted that despite hosting a third of the world’s languages, less than 0.1% of websites feature African language content. Yet, promising efforts like the African Storybook project and AI language models show how local storytelling and education can thrive in digital spaces.

Elena Plexida of ICANN revealed that only 26% of email servers accept non-Latin addresses, a stark reminder of the structural barriers to full digital participation.

The session concluded with a strong call for multistakeholder collaboration. Governments, tech companies, indigenous communities, and civil society must work together to make multilingualism the default, not the exception, in digital spaces. As Jelassi put it, ensuring every language has a place online is not just a technical challenge but a matter of cultural survival and digital justice.

East Meets West: Reimagining education in the age of AI

At the WSIS+20 High-Level Event in Geneva, the session ‘AI (and) education: Convergences between Chinese and European pedagogical practices’ brought together educators, students, and industry experts to examine how AI reshapes global education.

Led by Jovan Kurbalija of Diplo and Professor Hao Liu of Beijing Institute of Technology (BIT), with industry insights from Deloitte’s Norman Sze, the discussion focused on the future of universities and the evolving role of professors amid rapid AI developments.

Drawing on philosophical traditions from Confucius to Plato, the session emphasised the need for a hybrid approach that preserves the human essence of learning while embracing technological transformation.

Professor Liu showcased BIT’s ‘intelligent education’ model, a human-centred system integrating time, space, knowledge, teachers, and students. Moving beyond rigid, exam-focused instruction, BIT promotes creativity and interdisciplinary learning, empowering students with flexible academic paths and digital tools.

Jovan Kurbalija, Executive Director of Diplo, at the WSIS+20 High-Level Event 2025

Meanwhile, Norman Sze highlighted how AI has accelerated industry workflows and called for educational alignment with real-world demands. He argued for reorienting learning around critical thinking, ethical literacy, and collaboration—skills that AI cannot replicate and remain central to personal and professional growth.

A key theme was whether teachers and universities remain relevant in an AI-driven future. Students from around the world contributed compelling reflections: AI may offer efficiency, but it cannot replace the emotional intelligence, mentorship, and meaning-making that only human educators provide.

As one student said, ‘I don’t care about ChatGPT—it’s not human.’ The group reached a consensus: professors must shift from ‘sages on the stage’ to ‘guides on the side,’ coaching students through complexity rather than merely transmitting knowledge.

The session closed on an optimistic note, asserting that while AI is a powerful catalyst for change, the heart of education lies in human connection, dialogue, and the ability to ask the right questions. Participants agreed that a truly forward-looking educational model will emerge not from choosing between East and West or human and machine, but from integrating the best of all to build a more inclusive and insightful future of learning.

Google hit with EU complaint over AI Overviews

Google is facing an antitrust complaint in the European Union over its AI Overviews feature, following a formal filing by the Independent Publishers Alliance.

The group alleges that Google has been using web content without proper consent to power its AI-generated summaries, causing considerable harm to online publishers.

The complaint claims that publishers have lost traffic, readers and advertising revenue due to these summaries. It also argues that opting out of AI Overviews is not a real choice unless publishers are prepared to vanish entirely from Google’s search results.

AI Overviews were launched over a year ago and now appear at the top of many search queries, summarising information using AI. Although the tool has expanded rapidly, critics argue it drives users away from original publisher websites, especially news outlets.

Google has responded by stating its AI search tools allow users to ask more complex questions and help businesses and creators get discovered. The tech giant also insisted that web traffic patterns are influenced by many factors and warned against conclusions based on limited data.

Regions seek role in EU hospital cyber strategy

The European Commission’s latest plan to strengthen hospital cybersecurity has drawn attention from regional authorities across the EU, who say they were excluded from key decisions.

Their absence, they argue, could weaken the strategy’s overall effectiveness.

With cyberattacks on healthcare systems growing, regional representatives insist they should have a seat at the table.

As those directly managing hospitals and public health, they warn that top-down decisions may overlook urgent local challenges and lead to poorly matched policies.

The Commission’s plan includes creating a dedicated health cybersecurity centre under the EU Agency for Cybersecurity (ENISA) and setting up an EU-wide threat alert system.

Yet doubts remain over how these goals will be met without extra funding or clear guidance on regional involvement.

The concerns point to the need for a more collaborative approach that values regional knowledge.

Without it, the EU risks designing cybersecurity protections that fail to reflect the realities inside Europe’s hospitals.
