Washington city orders removal of crypto ATMs over rising scams 

The Spokane City Council in Washington State has unanimously voted to ban virtual currency kiosks across the city, including crypto ATMs. The ordinance targets approximately 50 machines found at convenience stores, gas stations, and major retailers such as Safeway and Walgreens.

Operators must remove their kiosks within 60 days or risk fines and potential loss of business licences.

Council members highlighted the growing threat these kiosks pose to vulnerable residents, particularly seniors, who have fallen victim to scams. Council Member Paul Dillon described the machines as ‘preferred tools’ for fraudsters, who exploit the decentralised nature of cryptocurrency and the limited options for tracing stolen funds.

The council initially sought state-level regulation, but after legislative delays, Spokane chose local action to address the issue.

The FBI estimates that $5.6 billion of the $6.5 billion lost nationwide to fraud, scams, and extortion in 2023 involved crypto kiosks. Seniors accounted for nearly half of those losses despite making up a far smaller share of the population.

Spokane Police Detective Tim Schwering reported numerous cases where victims were deceived into buying crypto through kiosks after being contacted by scammers impersonating law enforcement or tax officials. Tragically, several local suicides have been linked to these scams.

Episource data breach impacts patients at Sharp Healthcare

Episource, a UnitedHealth Group-owned health analytics firm, has confirmed that patient data was compromised during a ransomware attack earlier this year.

The breach affected Episource customers, including Sharp Healthcare and Sharp Community Medical Group, which have begun notifying impacted patients. Although electronic health records and patient portals remained untouched, sensitive information such as health plan details, diagnoses and test results was exposed.

The cyberattack, which occurred between 27 January and 6 February, involved unauthorised access to Episource’s internal systems.

A forensic investigation verified that cybercriminals viewed and copied files containing personal information, including insurance plan data, treatment plans, and medical imaging. Financial details and payment card data, however, were mostly unaffected.

Sharp Healthcare confirmed that it was informed of the breach on 24 April and has since worked closely with Episource to identify which patients were impacted.

Compromised information may include names, addresses, insurance ID numbers, doctors’ names, prescribed medications, and other protected health data.

The breach follows a troubling trend of ransomware attacks on healthcare-related businesses, including the 2024 attack on Change Healthcare, which disrupted services for months. Comparitech reports at least three confirmed ransomware attacks on healthcare firms already in 2025, with 24 more suspected.

Given the scale of patient data involved, experts warn of growing risks tied to third-party healthcare service providers.

UBS employee data leaked after Chain IQ ransomware attack

UBS Group AG has confirmed a serious data breach affecting around 130,000 of its employees, following a cyberattack on its third-party supplier, Chain IQ Group AG.

The exposed information included employee names, emails, phone numbers, roles, office locations, and preferred languages. No client data has been impacted, according to UBS.

Chain IQ, a procurement services firm spun off from UBS in 2013, was reportedly targeted by the cybercrime group World Leaks, previously known as Hunters International.

Unlike traditional ransomware operators, World Leaks avoids encryption and instead steals data, threatening public release if ransoms are not paid.

While Chain IQ has acknowledged the breach, it has not disclosed the extent of the stolen data or named all affected clients. Its client roster includes Swiss Life, AXA, FedEx, IBM, KPMG, Swisscom and Pictet; so far, only Pictet has confirmed it was impacted.

Cybersecurity experts warn that the breach may have long-term implications for the Swiss banking sector. Leaked employee data could be exploited for impersonation, fraud, phishing scams, or even blackmail.

The increasing availability of generative AI may further amplify the risks through voice and video impersonation, potentially aiding in money laundering and social engineering attacks.

AI pioneer warns of mass job losses

Geoffrey Hinton, often called the godfather of AI, has warned that the technology could soon trigger mass unemployment, particularly in white-collar roles. In a recent podcast interview, he said AI will eventually replace most forms of intellectual labour.

According to Hinton, jobs requiring basic reasoning or clerical tasks will be the first to go, with AI performing the work of multiple people. He expressed concern that call centre workers may already be vulnerable, while roles requiring physical skills, like plumbing, remain safer for now.

Hinton challenged the common belief that AI will create more jobs than it eliminates. He argued that unless someone has highly specialised expertise, they may find themselves outpaced by machines capable of learning and performing cognitive tasks.

He also criticised OpenAI’s recent corporate restructuring, saying the shift towards a profit-driven model risks sidelining the public interest. Hinton, alongside other critics including Elon Musk, warned that the changes could divert AI development from its original mission of serving humanity.

AI helps Google curb scams and deepfakes in India

Google has introduced its Safety Charter for India to combat rising online fraud, deepfakes and cybersecurity threats. The charter outlines a collaborative plan focused on user safety, responsible AI development and protection of digital infrastructure.

AI-powered measures have already helped Google detect 20 times more scam-related pages, block over 500 million scam messages monthly, and issue 2.5 billion suspicious link warnings. Its ‘DigiKavach’ programme has reached over 177 million Indians with fraud prevention tools and awareness campaigns.

Google Pay alone averted financial fraud worth ₹13,000 crore (around $1.5 billion) in 2024, while Google Play Protect stopped nearly 6 crore (60 million) high-risk app installations. These achievements reflect the company’s ‘AI-first, secure-by-design’ strategy for early threat detection and response.

The tech giant is also collaborating with IIT-Madras on post-quantum cryptography and privacy-first technologies. Through language models like Gemini and watermarking initiatives such as SynthID, Google aims to build trust and inclusion across India’s digital ecosystem.

Deepfake technology fuels new harassment risks

AI-generated media is reshaping workplace harassment in the US, with deepfakes used to impersonate colleagues and circulate fabricated explicit content. Recent studies found that, by 2023, nearly all deepfakes circulating online were sexually explicit, most often targeting women.

Organisations risk liability under existing laws if deepfake incidents create hostile work environments. New legislation like the TAKE IT DOWN Act and Florida’s Brooke’s Law now mandates rapid removal of non-consensual intimate imagery.

Employers are also bracing for proposed rules requiring strict authentication of AI-generated evidence in legal proceedings. Industry experts advise an urgent review of harassment and acceptable use policies, clear incident response plans and targeted training for HR, legal and IT teams.

Protective measures include auditing insurance coverage for synthetic media claims and staying abreast of evolving state and federal regulations. Forward-looking employers already embed deepfake awareness into their harassment prevention and cybersecurity training to safeguard workplace dignity.

Microsoft begins password deletion in six weeks

Microsoft has announced that it will begin deleting saved passwords from its Authenticator app in six weeks, urging users to shift to more secure passkeys. The company confirmed that by August 2025, saved passwords will no longer be accessible, marking a decisive move away from traditional logins.

Users can transition their credentials to Microsoft Edge or adopt passkeys, which are less vulnerable to phishing and breaches. Google is making similar recommendations, as most users still rely on passwords or outdated two-factor authentication despite the growing risks.

The changes reflect a broader industry push to phase out passwords entirely, citing their inherent insecurity and the surge in credential-based attacks. Microsoft also warned that attackers are intensifying efforts to exploit passwords before their relevance fades.

Authenticator will continue supporting passkeys, but users must keep it enabled as their passkey provider. Microsoft’s message is clear: act now to secure your accounts before password support disappears.
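
For readers curious what adopting a passkey actually involves, below is a minimal sketch of passkey registration through the standard WebAuthn browser API. The relying-party name, user handle and locally generated challenge are illustrative placeholders; in a real deployment, the challenge is random bytes issued by the server, which also verifies the response.

```typescript
// Minimal sketch of passkey registration via the WebAuthn API.
// Placeholders: "example.com", the user handle and the challenge
// (a real challenge must come from the server).
async function registerPasskey(): Promise<void> {
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
      rp: { name: "example.com" },
      user: {
        id: new TextEncoder().encode("user-1234"), // opaque, hypothetical user handle
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
      authenticatorSelection: {
        residentKey: "required",      // store the key on the device (a passkey)
        userVerification: "required", // unlocked by fingerprint, face or PIN
      },
    },
  });
  // Only the public key and a signed attestation go to the server;
  // the private key never leaves the authenticator.
  console.log("Passkey created:", credential?.id);
}
```

Because the private key never leaves the device and each signature is bound to the site’s origin, a look-alike phishing domain cannot replay the credential, which is the security property driving the shift away from passwords.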

Meta AI adds pop-up warning after users share sensitive info

Meta has introduced a new pop-up in its Meta AI app, alerting users that any prompts they share may be made public. While AI chat interactions are rarely private by design, many users appeared unaware that their conversations could be published for others to see.

The Discovery feed in the Meta AI app had previously featured conversations that included intimate details, such as break-up confessions, attempts at self-diagnosis, and private photo edits.

According to multiple reports last week, these were often shared unknowingly by users who may not have realised the implications of the app’s sharing functions. Mashable confirmed this by finding such examples directly in the feed.

Now, when a user taps the ‘Share’ button on a Meta AI conversation, a new warning appears: ‘Prompts you post are public and visible to everyone. Your prompts may be suggested by Meta on other Meta apps. Avoid sharing personal or sensitive information.’ A ‘Post to feed’ button then appears below.

Although the sharing step has always required users to confirm, Business Insider reports that the feature wasn’t clearly explained, leading some users to publish their conversations unintentionally. The new alert aims to clarify that process.

As of this week, Meta AI’s Discovery feed features mostly AI-generated images and more generic prompts, often from official Meta accounts. For users concerned about privacy, there is an option in the app’s settings to opt out of the Discovery feed altogether.

Still, experts advise against entering personal or sensitive information into AI chatbots, including Meta AI. Adjusting privacy settings and avoiding the ‘Share’ feature are the best ways to protect your data.

Google warns against weak passwords amid £12bn scams

Gmail users are being urged to upgrade their security as online scams continue to rise sharply, with cybercriminals stealing over £12 billion in the past year alone. Google warns that simple passwords leave people vulnerable to phishing and account takeovers.

To combat the threat, users are encouraged to switch to passkeys or use ‘Sign in with Google’, both of which offer stronger protections through fingerprint, face ID or PIN verification. Over 60% of Baby Boomers and Gen X users still rely on weak passwords, increasing their exposure to attacks.

Despite the availability of secure alternatives, only 30% of users reportedly use them daily. Gen Z is leading the shift by adopting newer tools, bypassing outdated security habits altogether.

Google recommends adding 2-Step Verification for those unwilling to leave passwords behind. With scams growing more sophisticated, extra security measures are no longer optional; they are essential.

Anubis ransomware threatens permanent data loss

A new ransomware threat known as Anubis is making waves in the cybersecurity world, combining file encryption with aggressive monetisation tactics and a rare file-wiping feature that prevents data recovery.

Victims discover their files renamed with the .anubis extension and are presented with a ransom note warning that stolen data will be leaked unless payment is made.

What sets Anubis apart is its ability to permanently erase file contents using a wipe command that reduces them to zero-byte shells. Although the filenames remain, the data inside is gone for good, rendering recovery impossible.
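
To make the mechanism concrete, here is a benign Node.js sketch of why truncation defeats recovery: the directory entry survives, but every byte of content is discarded, so no decryptor can restore it. The file ‘demo.txt’ is a throwaway created purely for the demonstration.

```typescript
// Benign illustration of a zero-byte wipe: the file's name survives,
// its contents do not. "demo.txt" is a throwaway file created here.
import { writeFileSync, truncateSync, statSync } from "node:fs";

writeFileSync("demo.txt", "important data"); // create a file with content
truncateSync("demo.txt", 0);                 // cut it to zero bytes in place
console.log(statSync("demo.txt").size);      // prints 0: name kept, data gone
```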

Researchers have flagged the destructive feature as highly unusual for ransomware; such wiper behaviour is more typical of cyberespionage than of financially motivated attacks.

The malware also attempts to change the victim’s desktop wallpaper to reinforce the impact, although in current samples, the image file was missing. Anubis spreads through phishing emails and uses tactics like command-line scripting and stolen tokens to escalate privileges and evade defences.

It operates under a ransomware-as-a-service model, meaning less-skilled cybercriminals can rent and deploy it with ease.

Security experts urge organisations to treat Anubis as more than a typical ransomware threat. Besides strong backup practices, firms are advised to improve email security, limit user privileges, and train staff to spot phishing attempts.

As attackers look to profit from stolen access and unrecoverable destruction, prevention becomes the only true line of defence.
