Trump deepfake scam bot targets crypto users

Russian security experts have uncovered a new deepfake scam exploiting the image of Donald Trump, targeting English-speaking audiences. FACCT, a Moscow-based cybercrime prevention firm, reported that scammers are using a bot to create deepfake videos of prominent figures like Trump, Elon Musk, and Tucker Carlson. These videos are being shared on platforms such as TikTok and YouTube to promote fraudulent crypto exchanges.

The bot allows users to generate customised videos with text up to 400 characters long, which fraudsters use to advertise fake trading platforms. FACCT identified three primary scams: fake exchanges where victims’ tokens are stolen, malware links that compromise crypto wallets, and bogus tokens that can’t be sold.

This warning follows a rise in crypto-related scams in Russia, including digital ruble frauds. Authorities are urging vigilance as the Russian Central Bank prepares to launch its central bank digital currency nationwide next year.

AI voice theft sparks David Attenborough’s outrage

David Attenborough has criticised American AI firms for cloning his voice to narrate partisan reports. Outlets such as The Intellectualist have deployed AI-generated imitations of his distinctive voice on topics including US politics and the war in Ukraine.

The broadcaster described these acts as ‘identity theft’ and expressed profound dismay over losing control of his voice after decades of truthful storytelling. Scarlett Johansson has faced a similar issue, with AI mimicking her voice for an online persona called ‘Sky’.

Experts warn that such technology poses risks to reputations and legacies. Dr Jennifer Williams of the University of Southampton highlighted the troubling implications for Attenborough’s legacy and authenticity in the public eye.

Regulations to prevent voice cloning remain absent, raising concerns about misuse of the technology. The Intellectualist has yet to comment on Attenborough’s allegations.

Japan, US, and South Korea bolster maritime, aerial, and cyber defence cooperation

Japan, the United States, and South Korea have concluded Freedom Edge, a three-day joint military exercise showcasing their commitment to strengthening multi-domain defence cooperation amid escalating tensions in East Asia. In this second iteration of Freedom Edge, select training sessions were open to the media. The drills spanned the maritime, aerial, and cyber domains, with operations conducted in strategic areas, including the East China Sea near South Korea’s Jeju Island.

Designed to counter various threats — from ballistic missiles and cyberattacks to fighter jets and submarines — the drills emphasised seamless coordination among the three nations’ forces. By refining joint response procedures, the exercise bolstered deterrence and preparedness for complex regional challenges.

Biden and Xi reach agreement to restrict AI in nuclear weapons decisions

President Joe Biden and China’s President Xi Jinping held a two-hour meeting on the sidelines of the APEC summit on Saturday. The two leaders reached a significant agreement to prevent AI from controlling nuclear weapons systems and made progress on securing the release of two US citizens wrongfully detained in China. Biden also pressed Xi to use Beijing’s influence to curb North Korea’s support for Russia in the ongoing conflict in Ukraine.

The breakthrough in nuclear safety, particularly the commitment to maintain human control over nuclear decisions, was reported as an achievement for Biden’s foreign policy. Xi, in contrast, called for greater dialogue and cooperation with the US and cautioned against efforts to contain China. His remarks also acknowledged rising geopolitical challenges, hinting at the difficulties that may arise under a Trump presidency. The meeting showcased a shift in tone from their previous encounter in 2023, reflecting a more constructive dialogue despite underlying tensions.

Reuters reported that it remains uncertain whether the statement will result in additional talks or concrete actions on the issue. The US has long held the position that AI should assist and enhance military capabilities, but not replace human decision-making in high-stakes areas such as nuclear weapons control. Last year, the Biden-Harris administration announced the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, which more than 20 countries have since endorsed. The declaration specifically underlines that “military use of AI capabilities needs to be accountable, including through such use during military operations within a responsible human chain of command and control”.

Google calls for better protection of Africa’s fibre optic infrastructure

Governments across Africa should increase the protection of fibre optic cables from theft and vandalism, while also aligning regulations to boost tech infrastructure development, according to a Google executive. Charles Murito, Google’s head of government relations and public policy in Africa, emphasised the need to classify fibre cables as critical infrastructure, which would ensure severe consequences for those who damage them. Theft and vandalism targeting batteries, generators, and cables have driven up costs for infrastructure providers.

Murito, speaking at the Africa Tech conference, highlighted Google’s investments in subsea cables, including Equiano, which connects Africa with Europe, and the upcoming Umoja cable linking Africa and Australia. He stressed that better protections and regulatory harmonisation could make the continent more appealing to tech investors. Industry leaders agree that such measures are essential to encouraging business expansion in Africa.

Additionally, Murito has called for more infrastructure sharing among internet service providers to reduce data costs. The diverse regulations across African nations concerning permissions for cable installations hinder the expansion of fibre networks. Although South Africa’s authorities have acknowledged the issue, urging law enforcement to act and proposing legal updates, fibre optic cables have yet to be formally classified as critical infrastructure.

UK and allies warn of growing cyberattacks exploiting zero-day vulnerabilities

The National Cyber Security Centre (NCSC) and its international partners have issued an urgent advisory highlighting the growing trend of threat actors exploiting zero-day vulnerabilities, emphasising the importance of proactive security measures.

The joint advisory was published by the UK NCSC, the US Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), the National Security Agency (NSA), the Australian Cyber Security Centre (ACSC), the Canadian Centre for Cyber Security (CCCS), the New Zealand National Cyber Security Centre (NCSC-NZ), and CERT NZ.

The agencies identified the top 15 most commonly exploited vulnerabilities of 2023. A majority of these vulnerabilities were initially exploited as zero-days: newly discovered flaws for which no patch was yet available, allowing cybercriminals to strike high-priority targets before fixes could be issued.

The advisory highlights a notable shift compared to 2022, when fewer than half of the top vulnerabilities were exploited as zero-days. The rise in zero-day attacks has continued into 2024, underlining the evolving tactics of cyber adversaries.

The advisory urges organisations to stay vigilant in their vulnerability management practices, prioritising the timely application of security updates and ensuring that all assets are identified and protected. It also calls on technology vendors and developers to adopt secure-by-design principles to minimise product vulnerabilities from the outset.

Turkey sanctions Twitch for user data breach

Turkey’s Personal Data Protection Board (KVKK) has fined Amazon’s gaming platform Twitch 2 million lira ($58,000) following a significant data breach, the Anadolu Agency reported. The breach, involving a leak of 125 GB of data, affected 35,274 individuals in Türkiye.

KVKK’s investigation revealed that Twitch failed to implement adequate security measures before the breach and conducted insufficient risk and threat assessments. The platform only addressed vulnerabilities after the incident occurred. As a result, KVKK imposed a 1.75 million lira fine for inadequate security protocols and an additional 250,000 lira for failing to report the breach promptly.

This penalty underscores the increasing scrutiny and regulatory actions against companies handling personal data in Türkiye, highlighting the importance of robust cybersecurity measures to protect user information.

T-Mobile targeted in Chinese cyber-espionage campaign

T-Mobile’s network was among those breached in a prolonged cyber-espionage campaign attributed to Chinese intelligence-linked hackers, according to a Wall Street Journal report. The attackers allegedly targeted multiple US and international telecom companies to monitor cellphone communications of high-value intelligence targets. T-Mobile confirmed it was aware of the industry-wide attack but stated there was no significant impact on its systems or evidence of customer data being compromised.

The Federal Bureau of Investigation (FBI) and the US Cybersecurity and Infrastructure Security Agency (CISA) recently disclosed that China-linked hackers intercepted surveillance data intended for American law enforcement by infiltrating telecom networks. Earlier reports revealed breaches into US broadband providers, including Verizon, AT&T, and Lumen Technologies, where hackers accessed systems used for court-authorised wiretapping.

China has consistently denied allegations of engaging in cyber espionage, rejecting claims by the US and its allies that it orchestrates such operations. The latest revelations highlight persistent vulnerabilities in critical communication networks targeted by state-backed hackers.

FTC’s Holyoak raises concerns over AI and kids’ data

Federal Trade Commissioner Melissa Holyoak has called for closer scrutiny of how AI products handle data from younger users, raising concerns about privacy and safety. Speaking at an American Bar Association meeting in Washington, Holyoak questioned what happens to information collected from children using AI tools, comparing their interactions to asking advice from a toy like a Magic 8 Ball.

The FTC, which enforces the Children’s Online Privacy Protection Act, has previously sued platforms like TikTok over alleged violations. Holyoak suggested the agency should evaluate its authority to investigate AI privacy practices as the sector evolves. Her remarks come as the FTC faces a leadership change with President-elect Donald Trump set to appoint a successor to Lina Khan, known for her aggressive stance against corporate consolidation.

Holyoak, considered a potential acting chair, emphasised that the FTC should avoid a rigid approach to mergers and acquisitions, while also predicting challenges to the agency’s worker noncompete ban. She noted that a Supreme Court decision on the matter could provide valuable clarity.

Ireland intensifies regulation on digital platforms to curb terrorist content

The Irish media regulator, Coimisiún na Meán, has mandated that online platforms TikTok, X, and Meta must take decisive steps to prevent the spread of terrorist content on their services, giving them three months to report on their progress.

This action follows notifications from EU authorities under the Terrorist Content Online Regulation. If the platforms fail to comply, the regulator can impose fines of up to four percent of their global revenue.

This decision aligns with Ireland’s broader enforcement of digital laws, including the Digital Services Act (DSA) and a new online safety code. The DSA has already prompted investigations, such as the European Commission’s probe into X last December, and Ireland’s new safety code will impose binding content moderation rules for video-sharing platforms with European headquarters in Ireland. These initiatives aim to curb the spread of harmful and illegal content on major social media platforms.