Fraudsters exploit dormant Bitcoin addresses to steal data

Analysts at BitMEX Research have revealed a new scam aimed at early Bitcoin holders, particularly those with dormant wallets dating back to 2011. Attackers use Bitcoin’s OP_RETURN field to embed messages in transactions sent to these addresses, attempting to deceive owners into sharing sensitive data.
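Mechanically, OP_RETURN lets a sender attach a small arbitrary data payload to a transaction output, which is how such messages can surface in a wallet’s transaction history. A minimal Python sketch of decoding such a payload (the script layout and example message are illustrative, not the actual scam messages):

```python
# Minimal sketch: decoding the text payload of a Bitcoin OP_RETURN
# output script. Assumes a simple "OP_RETURN <push>" script with a
# single-byte length prefix (payloads up to 75 bytes); real scripts
# may use OP_PUSHDATA1/2 and need fuller parsing.

def decode_op_return(script_hex):
    script = bytes.fromhex(script_hex)
    if not script or script[0] != 0x6A:  # 0x6a is the OP_RETURN opcode
        return None
    if len(script) < 2:
        return ""
    length = script[1]                   # direct push: one length byte
    payload = script[2:2 + length]
    return payload.decode("utf-8", errors="replace")

# Hypothetical script carrying the message "claim your coins"
demo = "6a10" + b"claim your coins".hex()
print(decode_op_return(demo))  # claim your coins
```

Wallet software that surfaces these payloads as if they were messages from a counterparty is exactly what the scam relies on.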

One high-profile victim is the ‘1Feex’ wallet, known for holding around 80,000 BTC stolen from the Mt. Gox hack.

Scammers created a fake Salomon Brothers site claiming that the wallets are abandoned unless owners prove ownership by signing messages or submitting personal documents. The site has no genuine link to the original financial firm or its former executives.

Crypto community members recommend a safer approach: moving a small amount of Bitcoin to demonstrate wallet activity instead of risking the full balance. BitMEX urges users to avoid interacting with fake sites or sharing personal data.

The scam exemplifies growing sophistication in crypto fraud, with losses exceeding $2.1 billion in just the first half of 2025.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The rise and risks of synthetic media

Synthetic media transforms content creation across sectors

The rapid development of AI has enabled significant breakthroughs in synthetic media, opening up new opportunities in healthcare, education, entertainment and many more.

Instead of relying on traditional content creation, companies are now using advanced tools to produce immersive experiences, training simulations and personalised campaigns. But what exactly is synthetic media?

Synthetic media refers to content produced partly or entirely by AI, including AI-generated images, music, video and speech. Tools such as ChatGPT, Midjourney and voice synthesisers are now widely used in both creative and commercial settings.

The global market for synthetic media is expanding rapidly. Valued at USD 4.5 billion in 2023, it is projected to reach USD 16.6 billion by 2033, driven mainly by tools that convert text into images, videos or synthetic speech.

The appeal lies in its scalability and flexibility: small teams can now quickly produce a wide range of professional-grade content and easily adapt it for multiple audiences or languages.

However, as synthetic media becomes more widespread, so do the ethical challenges it poses.

How deepfakes threaten trust and security

The same technology has raised serious concerns as deepfakes – highly realistic but fake audio, images and videos – become harder to detect and more frequently misused.

Deepfakes, a subset of synthetic media, go a step further by creating content that intentionally imitates real people in deceptive ways, often for manipulation or fraud.

The technology behind deepfakes involves face swapping through variational autoencoders and voice cloning via synthesised speech patterns. The entry barrier is low, making these tools accessible to the general public.

First surfacing on Reddit in 2017, deepfakes have quickly expanded into healthcare, entertainment, and education, yet they also pose a serious threat when misused. For example, a major financial scam recently cost a company USD 25 million due to a deepfaked video call with a fake CFO.

Synthetic media fuels global political narratives

Politicians and supporters have often openly used generative AI to share satirical or exaggerated content, rather than attempting to disguise it as real.

In Indonesia, AI even brought back the likeness of former dictator Suharto to endorse candidates, while in India, meme culture thrived but failed to significantly influence voters’ decisions.

In the USA, figures like Elon Musk and Donald Trump have embraced AI-generated memes and voice parodies to mock opponents or improve their public image.

While these tools have made it easier to create misinformation, researchers such as UC Berkeley’s Hany Farid argue that the greater threat lies in the gradual erosion of trust, rather than a single viral deepfake.

It is becoming increasingly difficult for users to distinguish truth from fiction, leading to a contaminated information environment that harms public discourse. Legal concerns, public scrutiny, and the proliferation of ‘cheapfakes’—manipulated media that do not rely on AI—may explain why the worst predictions have not yet materialised.

Nonetheless, experts warn that the use of AI in campaigns will continue to grow more sophisticated. Without clear regulation and ethical safeguards, future elections may struggle to contain the disruptive influence of synthetic media.

Children use AI to create harmful deepfakes

School-aged children are increasingly using AI tools to generate explicit deepfake images of their classmates, often targeting girls. What began as a novelty has become a new form of digital sexual abuse.

With just a smartphone and a popular app, teenagers can now create and share highly realistic fake nudes, turning moments of celebration, like a bat mitzvah photo, into weapons of humiliation.

Rather than being treated as simple pranks, these acts have severe psychological consequences for victims and are leaving lawmakers scrambling.

Educators and parents are now calling for urgent action. Instead of just warning teens about criminal consequences, schools are starting to teach digital ethics, consent, and responsible use of technology.

Programmes that explain the harm caused by deepfakes may offer a better path forward than punishment alone. Experts say the core issues—respect, agency, and safety—are not new.

The tools may be more advanced, but the message remains the same: technology must be used responsibly, not to exploit others.

Deepfakes become weapons of modern war

Deepfakes can also be deployed to sow confusion, falsify military orders, and manipulate public opinion. While not all such tactics will succeed, their growing use in psychological and propaganda operations cannot be ignored.

Intelligence agencies are already exploring how to integrate synthetic media into information warfare strategies, despite the risk of backfiring.

A new academic study from University College Cork examined how such videos spread on social media and how users reacted.

While many responded with scepticism and attempts at verification, others began accusing the real footage of being fake. The growing confusion risks creating an online environment where no information feels trustworthy, exactly the outcome hostile actors might seek.

While deception has long been part of warfare, deepfakes challenge the legal boundaries defined by international humanitarian law.

Falsifying surrender orders to launch ambushes could qualify as perfidy—a war crime—while misleading enemies about troop positions may remain lawful.

Yet when civilians are caught in the crossfire of digital lies, violations of the Geneva Conventions become harder to ignore.

Regulation is lagging behind the technology, and without urgent action, deepfakes may become as destructive as conventional weapons, redefining both warfare and the concept of truth.

The good side of deepfake technology

Yet, not all applications are harmful. In medicine, deepfakes can aid therapy or generate synthetic ECG data for research while protecting patient privacy. In education, the technology can recreate historical figures or deliver immersive experiences.

Journalists and human rights activists also use synthetic avatars for anonymity in repressive environments. Meanwhile, in entertainment, deepfakes offer cost-effective ways to recreate actors or build virtual sets.

These examples highlight how the same technology that fuels disinformation can also be harnessed for innovation and the public good.

Governments push for deepfake transparency

However, the risks are rising. Misinformation, fraud, nonconsensual content, and identity theft are all becoming more common.

The danger of copyright infringement and data privacy violations also looms large, particularly when AI-generated material pulls content from social media or copyrighted works without permission.

Policymakers are taking action, but is it enough?

The USA has banned AI robocalls, and Europe’s AI Act aims to regulate synthetic content. Experts emphasise the need for worldwide cooperation, with regulation focusing on consent, accountability, and transparency.

Embedding watermarks and enforcing civil liabilities are among the strategies being considered. To navigate the new landscape, a collaborative effort across governments, industry, and the public is crucial, not just to detect deepfakes but also to define their responsible use.

Some emerging detection methods include certifying content provenance, where creators or custodians attach verifiable information about the origin and authenticity of media.
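The provenance idea can be illustrated with a small Python sketch: the publisher signs a hash of the media file, and any later recipient can check that the file is unchanged. It uses HMAC with a shared secret for brevity; real provenance schemes such as C2PA manifests use public-key signatures and richer metadata, so the key and function names here are illustrative:

```python
# Illustrative sketch of content-provenance certification: a publisher
# signs the SHA-256 hash of a media file so a holder of the key can
# verify the file has not been altered since signing.
import hashlib
import hmac

SECRET = b"publisher-signing-key"  # hypothetical key for the demo

def certify(media):
    digest = hashlib.sha256(media).digest()
    return hmac.new(SECRET, digest, hashlib.sha256).hexdigest()

def verify(media, tag):
    return hmac.compare_digest(certify(media), tag)

original = b"frame data of a genuine video"
tag = certify(original)
print(verify(original, tag))                 # True: unmodified file
print(verify(original + b" tampered", tag))  # False: any change fails
```

The design point is that verification detects tampering after signing; it cannot say whether the original content was authentic in the first place, which is why provenance is combined with the detection and moderation approaches below.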

Automated detection systems analyse inconsistencies in facial movements, speech patterns, or visual blending to identify manipulated media. Additionally, platform moderation based on account reputation and behaviour helps filter suspicious sources.

Systems that process or store personal data must also comply with privacy regulations, ensuring individuals’ rights to correct or erase inaccurate data.

Yet, despite these efforts, many of these systems still struggle to reliably distinguish synthetic content from real content.

As detection methods lag, some organisations like Reality Defender and Witness work to raise awareness and develop countermeasures.

The rise of AI influencers on social media

Another subset of synthetic media is AI-generated influencers. AI (or synthetic) influencers are virtual personas powered by AI, designed to interact with followers, create content, and promote brands across social media platforms.

Unlike traditional influencers, they are not real people but computer-generated characters that simulate human behaviour and emotional responses. Developers use deep learning, natural language processing, and sophisticated graphic design to make these influencers appear lifelike and relatable.

Once launched, they operate continuously, often in multiple languages and across different time zones, giving brands a global presence without the limitations of human engagement.

These virtual influencers offer several key advantages for brands. They can be precisely controlled to maintain consistent messaging and avoid the unpredictability that can come with human influencers.

Their scalability allows them to reach diverse markets with tailored content, and over time, they may prove more cost-efficient due to their ability to produce content at scale without the ongoing costs of human talent.

Brands can also experiment with creative storytelling in new and visually compelling ways that might be difficult for real-life creators.

Synthetic influencers have also begun appearing in the healthcare sector; although their adoption there remains limited, it is expected to grow rapidly.

Their rise also brings significant challenges. AI influencers lack genuine authenticity and emotional depth, which can hinder the formation of meaningful connections with audiences.

Their use raises ethical concerns around transparency, especially if followers are unaware that they are interacting with AI.

Data privacy is another concern, as these systems often rely on collecting and analysing large amounts of user information to function effectively.

Additionally, while they may save money in the long run, creating and maintaining a sophisticated AI influencer involves a substantial upfront investment.

Study warns of backlash from synthetic influencers

A new study from Northeastern University urges caution when using AI-powered influencers, despite their futuristic appeal and rising prominence.

While these digital figures may offer brands a modern edge, they risk inflicting greater harm on consumer trust compared to human influencers when problems arise.

The findings show that consumers are more inclined to hold the brand accountable if a virtual influencer promotes a faulty product or spreads misleading information.

Rather than viewing these AI personas as independent agents, users tend to see them as direct reflections of the company behind them. Instead of blaming the influencer, audiences shift responsibility to the brand itself.

Interestingly, while human influencers are more likely to be held personally liable, virtual influencers still cause deeper reputational damage.

People assume that their actions are fully scripted and approved by the business, making any error seem deliberate or embedded in company practices rather than a personal mistake.

Regardless of the circumstances, AI influencers are reshaping the marketing landscape by providing an innovative and highly adaptable tool for brands. While they are unlikely to replace human influencers entirely, they are expected to play a growing role in digital marketing.

Their continued rise will likely force regulators, brands, and developers to establish clearer ethical standards and guidelines to ensure responsible and transparent use.

Shaping the future of synthetic media

In conclusion, the growing presence of synthetic media invites both excitement and reflection. As researchers, policymakers, and creators grapple with its implications, the challenge lies not in halting progress but in shaping it thoughtfully.

All forms of synthetic media, like any other form of technology, have a dual capacity to empower and exploit, demanding a new digital literacy — one that prioritises critical engagement, ethical responsibility, and cross-sector collaboration.

On the one hand, deepfakes threaten democratic stability, information integrity, and civilian safety, blurring the line between truth and fabrication in conflict, politics, and public discourse.

On the other hand, AI influencers are transforming marketing and entertainment by offering scalable, controllable, and hyper-curated personas that challenge notions of authenticity and human connection.

Rather than fearing the tools themselves, we need to focus on cultivating the norms and safeguards that determine how, and for whom, they are used. Ultimately, these tools are meant to enhance our way of life, not undermine it.

Cybercrime soars as firms underfund defences

Nearly four in ten UK businesses (38%) do not allocate a dedicated cybersecurity budget, even as cybercrime costs hit an estimated £64 billion over three years.

Smaller enterprises are particularly vulnerable, with 15% reporting breaches linked to underfunding.

Almost half of organisations (45%) rely solely on in-house defences, with only 8% securing standalone cyber insurance, exposing many to evolving threats.

Common attacks include phishing campaigns, AI-powered malware and DDoS, yet cybersecurity typically receives just 11% of IT budgets.

Security professionals call for stronger board‑level involvement and increased collaboration with specialists and regulators.

They caution that businesses risk suffering further financial and reputational damage without proactive budgeting and external expertise.

Hackers ramp up attacks on employee credentials

Recent research highlights a surge in identity‑focused cyberattacks aimed at stealing employee credentials.

Corporate login information is harvested using sophisticated tools like infostealer malware, phishing campaigns, and automated credential stuffing.

Security experts warn that compromised credentials allow attackers to masquerade as staff, access internal systems, and move laterally across organisations.

While some major firms rely solely on passwords, rigorous measures such as strong multifactor authentication, proactive monitoring, and cyber awareness training are more effective defences.
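One concrete form of the proactive monitoring mentioned above can be sketched in Python: credential stuffing tends to show up as a single source IP failing logins across many different accounts, so counting distinct usernames per IP is a crude but useful signal. The threshold and event format are illustrative, not a tuned production rule:

```python
# Minimal sketch of a credential-stuffing signal: flag source IPs whose
# failed logins span many *different* accounts. A single user retrying
# a password does not trip the rule; a spray across accounts does.
from collections import defaultdict

def flag_stuffing_ips(failed_logins, min_accounts=3):
    """failed_logins: iterable of (source_ip, username) failure events."""
    accounts_per_ip = defaultdict(set)
    for ip, user in failed_logins:
        accounts_per_ip[ip].add(user)
    return {ip for ip, users in accounts_per_ip.items()
            if len(users) >= min_accounts}

events = [
    ("203.0.113.9", "alice"), ("203.0.113.9", "bob"),
    ("203.0.113.9", "carol"), ("198.51.100.4", "dave"),
    ("198.51.100.4", "dave"),  # one user retrying is not stuffing
]
print(flag_stuffing_ips(events))  # {'203.0.113.9'}
```

In practice this kind of rule would feed an alerting pipeline alongside MFA enforcement, rather than block traffic on its own.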

Despite awareness of these threats, many companies do not thoroughly scan for leaked credentials or flag suspicious login activity promptly.

However, this hesitancy often stems from budget limitations, competing priorities or bureaucratic inertia.

Security specialists stress the need for coordinated investment in layered security measures to protect against evolving identity‑based attacks.

OpenAI locks down operations after DeepSeek model concerns

OpenAI has significantly tightened its internal security following reports that DeepSeek may have replicated its models. DeepSeek allegedly used distillation techniques to launch a competing product earlier this year, prompting a swift response.

OpenAI has introduced strict access protocols to prevent information leaks, including fingerprint scans, offline servers, and a policy restricting internet use without approval. Sensitive projects such as its o1 model are now discussed only by approved staff within designated areas.

The company has also boosted cybersecurity staffing and reinforced its data centre defences. Confidential development information is now shielded through ‘information tenting’.

These actions coincide with OpenAI’s $30 billion deal with Oracle to lease 4.5 gigawatts of data centre capacity across the United States. The partnership plays a central role in OpenAI’s growing Stargate infrastructure strategy.

Phishing 2.0: How AI is making cyber scams more convincing

Phishing remains among the most widespread and dangerous cyber threats, especially for individuals and small businesses. These attacks rely on deception—emails, texts, or social messages that impersonate trusted sources to trick people into giving up sensitive information.

Cybercriminals exploit urgency and fear. A typical example is a fake email from a bank saying your account is at risk, prompting you to click a malicious link. Even when emails look legitimate, subtle details—like a strange sender address—can be red flags.

In one recent scam, Netflix users received fake alerts about payment failures. The link led to a fake login page where credentials and payment data were stolen. Similar tactics have been used against QuickBooks users, small businesses, and Microsoft 365 customers.

Small businesses are frequent targets due to limited security resources. Emails mimicking vendors or tech companies often trick employees into handing over credentials, giving attackers access to sensitive systems.

Phishing works because it preys on human psychology: trust, fear, and urgency. And with AI, attackers can now generate more convincing content, making detection harder than ever.

Protection starts with vigilance. Always check sender addresses, avoid clicking suspicious links, and enable multi-factor authentication (MFA). Employee training, secure protocols for sensitive requests, and phishing simulations are critical for businesses.
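The sender-address check can be partly automated. A small Python sketch flags lookalike domains by comparing the sender’s domain against a trusted list with a similarity ratio; the trusted list and the 0.8 threshold are assumptions for illustration, not a production rule:

```python
# Illustrative check for one phishing red flag: lookalike sender
# domains (typosquats) that are close to, but not exactly, a
# trusted domain.
from difflib import SequenceMatcher

TRUSTED = {"netflix.com", "microsoft.com", "quickbooks.intuit.com"}

def sender_risk(address):
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED:
        return "trusted"
    for good in TRUSTED:
        # ratio close to 1.0 means a near-miss of a trusted domain
        if SequenceMatcher(None, domain, good).ratio() > 0.8:
            return f"suspicious lookalike of {good}"
    return "unknown"

print(sender_risk("billing@netflix.com"))  # trusted
print(sender_risk("billing@netfIix.com"))  # capital I posing as l
print(sender_risk("news@example.org"))     # unknown
```

A check like this catches only one class of spoofing; it complements, rather than replaces, MFA and user training.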

Phishing attacks will continue to grow in sophistication, but with awareness and layered security practices, users and businesses can stay ahead of the threat.

Azerbaijan’s State Security Service tackles surveillance camera cyber breach

Azerbaijan’s State Security Service has disrupted a significant cybersecurity breach targeting surveillance cameras nationwide. The agency says unauthorised remote access had allowed attackers to capture and leak footage of private homes and offices.

The attackers exploited a digital video recorder (DVR) system vulnerability, intercepting live camera feeds. Footage of private family life was reportedly uploaded to foreign websites and even sold online.

In response, the State Security Service of Azerbaijan coordinated with other state bodies to identify compromised systems and locations. Technical inspections revealed a widespread security flaw in the surveillance devices.

The vulnerability was reported to the foreign manufacturer of the equipment, with an urgent request for a fix. Illegally uploaded footage has since been removed from affected platforms.

Citizens are urged to avoid using devices of unknown origin and follow best practices when managing digital systems. Authorities emphasised the importance of protecting personal data and maintaining cyber hygiene.

Scammers use fake celebrities to steal millions in crypto fraud

Fraudsters increasingly pretend to be celebrities to deceive people into fake cryptocurrency schemes. Richard Lyons lost $10,000 after falling for a scam involving a fake Elon Musk, who used an AI-generated voice and images to make the investment offer appear authentic.

The FBI has highlighted a sharp rise in crypto scams during 2024, with billions lost as fraudsters pose as financial experts or love interests. Many scams involve fake websites that mimic legitimate investment platforms, showing false gains before stealing funds.

Lyons was shown a fake web page indicating his investment had grown to $50,000 before the scam was uncovered.

Experts warn that thorough research and caution are essential when approached online with investment offers. The FBI urges potential investors to consult trusted advisers and avoid sending money to strangers.

Blockchain firms like Lionsgate Network now offer rapid tracing of stolen crypto, although recovery is usually limited to high-value cases.

Lyons described the scam’s impact as devastating, leaving him struggling with everyday expenses. Authorities advise anyone targeted by similar frauds to report promptly for a better chance of recovery and protection.

Court convicts duo in UK crypto fraud worth $2 million

A UK court has sentenced Raymondip Bedi and Patrick Mavanga to a combined 12 years in prison for a cryptocurrency fraud scheme that defrauded 65 victims of around $2 million. Between 2017 and 2019, the pair cold-called investors while posing as advisers, directing them to fake crypto websites.

Bedi was sentenced to five years and four months, while Mavanga received six years and six months behind bars. Both operated under CCX Capital and Astaria Group LLP, deliberately bypassing financial regulations to extract illicit gains.

The scam targeted retail investors with little crypto experience, luring them with promises of high profits and misleading sales materials.

Victim impact statements revealed severe financial and emotional consequences. Some lost their life savings or fell into debt, and others developed mental health issues. Mavanga was also found guilty of hiding incriminating evidence during the investigation.

The Financial Conduct Authority (FCA) led the prosecution amid a heavy backlog of crypto fraud cases, exposing regulators’ challenges in enforcing laws.

The court encouraged victims to seek support and highlighted the need for vigilance against similar scams. While the prosecution offers closure for some, the lengthy process underscores the ongoing difficulties in policing the fast-evolving crypto market.

Three nations outline cyber law views ahead of UN talks

In the lead-up to the concluding session of the UN Open-Ended Working Group (OEWG) on ICTs, Thailand, New Zealand, and South Korea have released their respective national positions on the application of international law in cyberspace, contributing to the growing corpus of state practice on the issue.

Thailand’s position (July 2025) emphasises that existing international law, including the UN Charter, applies to the conduct of states in cyberspace. On international humanitarian law (IHL), Thailand stresses that IHL applies to cyber operations conducted in the context of armed conflict and to all forms of warfare, including cyberwarfare. Thailand also affirms that sovereignty applies in full to state activities in cyberspace: even where a cyber operation does not rise to the level of a prohibited use of force under international law, it may still amount to an internationally wrongful act.

New Zealand’s updated statement builds upon its 2020 position by reaffirming that international law applies to cyberspace ‘in the same way it applies in the physical world’. It provides expanded commentary on the principles of sovereignty and due diligence, explicitly recognising that New Zealand ‘does not consider that territorial sovereignty prohibits every unauthorised intrusion into a foreign ICT system or prohibits all cyber activity which has effects on the territory of another state’. The statement further provides that New Zealand considers that the rule of territorial sovereignty, as applied in the cyber context, does not prohibit states from taking necessary measures, with minimally destructive effects, to defend against the harmful activity of malicious cyber actors.

South Korea’s position focuses on the applicability of international law to military cyber operations. It affirms the applicability of the UN Charter and IHL, emphasising restraint and the protection of civilians in cyberspace. On sovereignty, South Korea’s position is close to Thailand’s: it affirms that no state may intervene in the domestic affairs of another, recalling that this principle is codified in Article 2(7) of the UN Charter and has been affirmed in international jurisprudence, and concludes that the principle of sovereignty applies equally in cyberspace. The position paper also highlights that, under general international law, lawful countermeasures are permissible in response to internationally wrongful acts, and that this principle applies equally in cyberspace. Given the anonymity and transboundary nature of cyberspace, which often places the injured state at a structural disadvantage, the necessity of countermeasures may be recognised as a means of ensuring adequate protection for the injured state.

These publications come at a critical juncture as the OEWG seeks to finalise its report on responsible state behaviour in cyberspace. With these latest contributions, the number of publicly released national positions on international law in cyberspace continues to grow, reflecting increasing engagement from states across regions.