Jamie Lee Curtis calls out Zuckerberg over AI scam using her likeness

Jamie Lee Curtis has directly appealed to Mark Zuckerberg after discovering her likeness had been used without consent in an AI-generated advert.

Posting on Facebook, Curtis expressed her frustration with Meta’s lack of proper channels to report such abuse, stating she had exhausted all official avenues before resorting to a public plea.

The fake video reportedly manipulated footage from an emotional interview following the January wildfires in Los Angeles, inserting false statements under the guise of a product endorsement.

Instead of remaining silent, Curtis urged Zuckerberg to take action, saying the unauthorised content damaged her integrity and voice. Within hours of her public callout, Meta confirmed the video had been removed for breaching its policies, a rare example of a swift response.

‘It worked! Yay Internet! Shame has its value!’ she wrote in a follow-up, though she also highlighted the broader risks posed by deepfakes.

The actress joins a growing list of celebrities, including Taylor Swift and Scarlett Johansson, who’ve been targeted by AI misuse.

Swift was forced to publicly clarify her political stance after an AI video falsely endorsed Donald Trump, while Johansson criticised OpenAI for allegedly using a voice nearly identical to hers despite her refusal to participate in a project.

The issue has reignited concerns around consent, misinformation and the exploitation of public figures.

Instead of waiting for further harm, lawmakers in California have already begun pushing back. New legislation signed by Governor Gavin Newsom aims to protect performers from unauthorised digital replicas and deepfakes.

Meanwhile, in Washington, proposals like the No Fakes Act seek to hold tech platforms accountable, possibly fining them thousands per violation. As Curtis and others warn, without stronger protections, the misuse of AI could spiral further, threatening not just celebrities but the public as a whole.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyber attack disrupts Edinburgh school networks

Thousands of Edinburgh pupils were forced to attend school on Saturday after a phishing attack disrupted access to vital online learning resources.

The cyber incident, discovered on Friday, prompted officials to lock users out of the system as a precaution, just days before exams.

Approximately 2,500 students visited secondary schools to reset passwords and restore their access. Although the revision period was interrupted, the council confirmed that no personal data had been compromised.

Council staff acted swiftly to contain the threat, supported by national cybersecurity teams. Ongoing monitoring is in place, and authorities are confident that exam schedules will be unaffected.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Punycode scams steal crypto through lookalike URLs

Crypto holders are facing a growing threat from a sophisticated form of phishing that swaps letters in website addresses for nearly identical lookalikes, tricking users into handing over their digital assets.

Known as Punycode phishing, the tactic has led to significant losses—even for vigilant users—by mimicking legitimate cryptocurrency exchange sites with deceptive domain names.

Cybercriminals exploit the similarity between characters from different alphabets, such as replacing Latin letters with visually identical Cyrillic ones.
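The substitution is easy to demonstrate: browsers transmit internationalised domain names in an ASCII form called Punycode, and converting a lookalike domain to that form exposes the trick. Below is a minimal Python sketch using a hypothetical spoof of apple.com in which the first letter is the Cyrillic 'а' (U+0430) rather than the Latin 'a'; the domain is illustrative only.

```python
# A hypothetical homoglyph domain: the first character is the
# Cyrillic letter 'а' (U+0430), visually identical to Latin 'a'.
spoofed = "аpple.com"   # Cyrillic 'а'
genuine = "apple.com"   # Latin 'a'

# The two strings look the same on screen but are different code points.
print(spoofed == genuine)                # False

# Python's standard "idna" codec performs the same ASCII conversion
# browsers use, revealing the substitution in the xn-- prefix.
print(spoofed.encode("idna").decode())   # xn--pple-43d.com
print(genuine.encode("idna").decode())   # apple.com
```

Some browsers display suspicious mixed-script domains in this xn-- form precisely because the Unicode rendering is indistinguishable to the eye.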

These fake websites are almost indistinguishable from real ones, making it extremely difficult to spot the fraud. Recent reports reveal that even browser recommendation systems, such as Google Chrome’s, have directed users to these deceptive domains.

In one widely cited case, a user was guided to a fraudulent site impersonating the crypto exchange ChangeNOW and subsequently lost over $20,000. The incident has raised questions about browser accountability and the urgency of protective measures against increasingly advanced phishing strategies.

US regulators, including the Federal Trade Commission (FTC), the North American Securities Administrators Association (NASAA), and California’s Department of Financial Protection and Innovation (DFPI), have issued ongoing warnings about crypto scams.

While none have specifically addressed Punycode-based attacks, their standard advice remains critical: scrutinise URLs carefully, treat unsolicited links with scepticism, and report fraud immediately.

As phishing methods evolve, users are urged to double-check domain names, avoid clicking unverified links, and consult tools like the DFPI Crypto Scam Tracker. Until browsers and platforms address the threat directly, user awareness remains the most effective defence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google pays around $1.4 billion over privacy case

Google has agreed to pay $1.375 billion to settle a lawsuit brought by the state of Texas over allegations that it violated users’ privacy through features such as Incognito mode, Location History, and biometric data collection.

Despite the sizeable sum, Google denies any wrongdoing, saying the claims concerned past practices that have since been changed.

Texas Attorney General Ken Paxton announced the settlement, emphasising that large tech firms are not above the law.

He accused Google of covertly tracking individuals’ locations and personal searches, while also collecting biometric data such as voiceprints and facial geometry — all without users’ consent. Paxton claimed the state’s legal challenge had forced Google to answer for its actions.

Although the settlement resolves two lawsuits filed in 2022, the specific terms and how the funds will be used remain undisclosed. A Google spokesperson said the resolution draws a line under claims about past practices and does not require any changes to its current products.

The case comes after a similar $1.4 billion agreement involving Meta, which faced accusations of unlawfully gathering facial recognition data. The repeated scrutiny from Texas authorities signals a broader pushback against the data practices of major tech companies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Starkville Utilities hit by cyberattack

Starkville Utilities, a Mississippi-based electricity and water provider that also services Mississippi State University, has revealed a data breach that may have exposed sensitive information belonging to over 11,000 individuals.

The breach, which was first detected in late October last year, led the company to disconnect its network in an attempt to contain the intrusion.

Despite these efforts, an investigation later found that attackers may have accessed personal data, including full names and Social Security numbers. Details were submitted to the Maine Attorney General’s Office, confirming the scale of the breach and the nature of the data involved.

While no reports of identity theft have emerged since the incident, Starkville Utilities has chosen to offer twelve months of free identity protection services to those potentially affected. The company maintains that it is taking additional steps to improve its cybersecurity defences.

Stolen data such as Social Security numbers often ends up on underground marketplaces, where it can be used for identity fraud and other malicious activity.

The incident serves as yet another reminder of the ongoing threat posed by cybercriminals targeting critical infrastructure and user data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reddit cracks down after AI bot experiment exposed

Reddit is accelerating plans to verify the humanity of its users following revelations that AI bots infiltrated a popular debate forum to influence opinions. These bots crafted persuasive, personalised comments based on users’ post histories, without disclosing their non-human identity.

Researchers from the University of Zurich conducted an unauthorised four-month experiment on the r/changemyview subreddit, deploying AI agents posing as trauma survivors, political figures, and other sensitive personas.

The incident sparked outrage across the platform. Reddit’s Chief Legal Officer condemned the experiment as a violation of both legal and ethical standards, while CEO Steve Huffman stressed that the platform’s strength lies in genuine human exchange.

All accounts linked to the study have been banned, and Reddit has filed formal complaints with the university. To restore trust, Reddit will introduce third-party verification tools that confirm users are human, without collecting personal data.

While protecting anonymity remains a priority, the platform acknowledges it must evolve to meet new threats posed by increasingly sophisticated AI impersonators.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cypriots worry AI threatens artists and culture

A new Eurobarometer survey has revealed that a significant majority of Cypriots are worried about the impact of AI on the cultural sector and the livelihoods of artists. Eight in ten believe that generative AI poses a threat to employment in the arts, a figure higher than the EU average of 73 per cent.

Despite these concerns, only half of Cypriots say they can distinguish between AI-generated and human-made artworks. The survey also highlights deeper cultural challenges in Cyprus. Only 23 per cent of respondents believe artists are paid fairly, compared to 51 per cent across the EU.

When asked about EU priorities in cultural cooperation, Cypriots pointed to protecting cultural heritage, fair pay for artists, reskilling cultural workers, improving access to the arts, and boosting funding for creative sectors.

Cypriots overwhelmingly value culture’s role in Europe’s future, with 91 per cent endorsing its importance. However, just 63 per cent believe artists in Cyprus enjoy freedom from government censorship, and only 59 per cent feel protected from other forms of suppression, both figures well below EU averages.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FTC says Amazon misused legal privilege to dodge scrutiny

Federal regulators have accused Amazon of deliberately concealing incriminating evidence in an ongoing antitrust case by abusing privilege claims. The Federal Trade Commission (FTC) said Amazon wrongly withheld nearly 70,000 documents, withdrawing 92 per cent of its claims after a judge forced a re-review.

The FTC claims Amazon marked non-legal documents as privileged to keep them from scrutiny. Internal emails suggest staff were told to mislabel communications by including legal teams unnecessarily.

One email reportedly called former CEO Jeff Bezos the ‘chief dark arts officer,’ referring to questionable Prime subscription tactics.

The documents revealed issues such as widespread involuntary Prime sign-ups and efforts to manipulate search results in favour of Amazon’s products. Regulators said these practices show Amazon intended to hide evidence rather than make honest errors.

The FTC is now seeking a 90-day extension for discovery and wants Amazon to cover the additional legal costs, arguing that the delay and concealment gave the company an unfair strategic advantage.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini Nano boosts scam detection on Chrome

Google has released a new report outlining how it is using AI to better protect users from online scams across its platforms.

The company says AI is now actively fighting scams in Chrome, Search and Android, with new tools able to detect and neutralise threats more effectively than before.

At the heart of these efforts is Gemini Nano, Google’s on-device AI model, which has been integrated into Chrome to help identify phishing and fraudulent websites.

The report claims the upgraded systems can now detect 20 times more harmful websites, many of which aim to deceive users by creating a false sense of urgency or offering fake promotions. These scams often involve phishing, cryptocurrency fraud, clone websites and misleading subscriptions.

Search has also seen major improvements. Google’s AI-powered classifiers are now better at spotting scam-related content before users encounter it. For example, the company says it has reduced scams involving fake airline customer service agents by over 80 per cent, thanks to its enhanced detection tools.

Meanwhile, Android users are beginning to see stronger safeguards as well. Chrome on Android now warns users about suspicious website notifications, offering the choice to unsubscribe or review them safely.

Google has confirmed plans to extend these protections even further in the coming months, aiming to cover a broader range of online threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Indian stock exchanges curb foreign access amid cybersecurity concerns

India’s two largest stock exchanges, the National Stock Exchange (NSE) and BSE Ltd, have temporarily restricted overseas access to their websites amid rising concerns over cyber threats. The move does not affect foreign investors’ ability to trade on Indian markets.

Sources familiar with the matter confirmed the decision followed a joint meeting between the exchanges, although no specific recent attack has been cited.

Despite the restrictions, market operations remain fully functional, with officials emphasising that the measures are purely preventive.

The precautionary step comes during heightened regional tensions between India and Pakistan, though no link to the geopolitical situation has been confirmed. The NSE has yet to comment publicly on the situation.

A BSE spokesperson noted that the exchanges are monitoring cyber risks both domestically and internationally and that website access is now granted selectively to protect users and infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!