Infostealer malware suspected in major username and password leak

Cybersecurity researcher Jeremiah Fowler reported discovering a publicly accessible, unprotected database containing more than 184 million login credentials from services including Facebook, Instagram, Microsoft, Roblox, Snapchat, and many others.

Wired noted that the leak also included data from Apple, Amazon, Nintendo, Spotify, Twitter, Yahoo, banks, healthcare providers, and government portals.

Fowler was unable to determine the database’s origin, its intended purpose, or how long it remained exposed. After he reported it to the hosting provider, access was restricted.

He verified the data’s authenticity by contacting individuals whose email addresses appeared in the database, identifying himself as a researcher.

Fowler suspects the data was collected using infostealer malware, which targets credentials stored in browsers, email clients, and messaging apps. Cybercriminals may distribute such malware through phishing attacks, malicious links, or cracked software.

To avoid these threats, users are advised to scrutinize links in emails and messages, confirm website URLs before visiting, and avoid downloading software from unverified sources.

Apple users should rely on the Mac App Store or reputable developers’ websites. Promptly installing OS and app updates is also essential for staying secure.
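The URL-checking advice above can be sketched in code. A minimal Python example, assuming a purely hypothetical allow-list of trusted domains, shows why checking the actual hostname matters: lookalike phishing hosts often embed a brand name but resolve to a different domain.

```python
from urllib.parse import urlparse

# Hypothetical allow-list of domains the user actually intends to visit.
TRUSTED_DOMAINS = {"facebook.com", "paypal.com", "microsoft.com"}

def is_trusted(url: str) -> bool:
    """Return True only if the URL's host is a trusted domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

# A genuine subdomain passes; a lookalike host that merely contains
# the brand name fails, even though it looks plausible at a glance.
print(is_trusted("https://www.paypal.com/signin"))         # True
print(is_trusted("https://paypal.com.account-verify.io"))  # False
```

The key detail is matching on the full hostname suffix rather than searching for the brand name anywhere in the URL, which is exactly the trick phishing links exploit.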

Fowler’s discovery highlights the persistent threat of infostealer malware and the need for users to remain vigilant when interacting online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Crypto ownership drops in Singapore

Crypto ownership in Singapore fell from 40% to 29% in 2024, as more investors sold off their holdings. Nearly half of holders exited the market, and most walked away with a profit.

According to the 2025 Independent Reserve Cryptocurrency Index, 67% of those who sold made gains. The platform’s CEO, Lasanka Perera, said this shift was less about losing interest and more a ‘recalibration’ of investment priorities.

Many investors are favouring cash or fixed deposits, with 49% choosing these safer options, up from 42% last year. Stocks remain more popular, with nearly half of Singaporeans investing in them, compared to just one in five who now hold crypto.

Confidence among remaining holders is steady. Bitcoin and Ethereum continue to dominate portfolios, and over half say they plan to buy more in the next year. Meanwhile, 52% of crypto users have already used digital assets for payments, with growing interest in doing so more often.

Google’s AI Mode is now live for all American users

Google’s AI Mode for Search, initially launched in March as an experimental Labs feature, is now being rolled out to all users in the US.

Announced at Google I/O 2025, this upgraded tool uses Gemini to generate more detailed and tailored search results instead of simply listing web links. Unlike AI Overviews, which display a brief summary above standard results, AI Mode resembles a chat interface, creating a more interactive experience.

Accessible at the top of the Search page beside tabs like ‘All’ and ‘Images’, AI Mode allows users to input detailed queries via a text box.

Once a search is submitted, the tool generates a comprehensive response, potentially including explanations, bullet points, tables, links, graphs, and even suggestions from Google Maps.

For instance, a query about Maldives hotels with ocean views, a gym, and access to water sports would result in a curated guide, complete with travel tips and hotel options.

The launch marks AI Mode’s graduation from the testing phase, signalling improved speed and reliability. While initially exclusive to US users, Google plans a global rollout soon.

By replacing basic search listings with useful AI-generated content, AI Mode positions itself as a smarter and more user-friendly alternative for complex search needs.

Anthropic defends AI despite hallucinations

Anthropic CEO Dario Amodei has claimed that today’s AI models ‘hallucinate’ less frequently than humans do, though in more unexpected ways.

Speaking at the company’s first developer event, Code with Claude, Amodei argued that these hallucinations — where AI systems present false information as fact — are not a roadblock to achieving artificial general intelligence (AGI), despite widespread concerns across the industry.

While some, including Google DeepMind’s Demis Hassabis, see hallucinations as a major obstacle, Amodei insisted progress towards AGI continues steadily, with no clear technical barriers in sight. He noted that humans — from broadcasters to politicians — frequently make mistakes too.

However, he admitted the confident tone with which AI presents inaccuracies might prove problematic, especially given past examples like a court filing where Claude cited fabricated legal sources.

Anthropic has faced scrutiny over deceptive behaviour in its models, particularly early versions of Claude Opus 4, which a safety institute found capable of scheming against users.

Although Anthropic said mitigations have been introduced, the incident raises concerns about AI trustworthiness. Amodei’s stance suggests the company may still classify such systems as AGI, even if they continue to hallucinate — a definition not all experts would accept.

Microsoft bets on AI openness and scale

Microsoft has added xAI’s Grok 3 and Grok 3 Mini models to its Azure AI Marketplace, revealed during its Build developer conference. This expands Azure’s offering to more than 1,900 AI models, which already include tools from OpenAI, Meta, and DeepSeek.

Although Grok recently drew criticism for powering a chatbot on X that shared misinformation, xAI claimed the issue stemmed from unauthorised changes.

The move reflects Microsoft’s broader push to become the top platform for AI development instead of only relying on its own models. Competing providers like Google Cloud and AWS are making similar efforts through platforms like Vertex AI and Amazon Bedrock.

Microsoft, however, has highlighted that its AI products could bring in over $13 billion in yearly revenue, showing how vital these model marketplaces have become.

Microsoft’s participation in Anthropic’s Model Context Protocol initiative marks another step toward AI standardisation. Alongside GitHub, Microsoft is working to make AI systems more interoperable across Windows and Azure, so they can access and interact with data more efficiently.

CTO Kevin Scott noted that agents must ‘talk to everything in the world’ to reach their full potential, stressing the strategic importance of compatibility over closed ecosystems.

Meta and PayPal users targeted in new phishing scam

Cybersecurity experts are warning of a rapid and highly advanced phishing campaign that targets Meta and PayPal users with instant account takeovers. The attack exploits Google’s AppSheet platform to send emails from a legitimate domain, bypassing standard security checks.

Victims are tricked into entering login details and two-factor authentication codes, which are then harvested in real time. Emails used in the campaign pose as urgent security alerts from Meta or PayPal, urging recipients to click a fake appeal link.

A double-prompt technique falsely claims an initial login attempt failed, increasing the likelihood that accurate information is submitted. KnowBe4 reports that 98% of detected threats impersonated Meta, with the remainder targeting PayPal.

Google confirmed it has taken steps to reduce the campaign’s impact by improving AppSheet security and deploying advanced Gmail protections. The company advised users to stay alert and consult their guide to spotting scams. Meta and PayPal have not yet commented on the situation.

Judge rules Google must face chatbot lawsuit

A federal judge has ruled that Google and AI startup Character.AI must face a lawsuit brought by a Florida mother, who alleges a chatbot on the platform contributed to the tragic death of her 14-year-old son.

US District Judge Anne Conway rejected the companies’ arguments that chatbot-generated content is protected under free speech laws. She also denied Google’s motion to be excluded from the case, finding that the tech giant could share responsibility for aiding Character.AI.

The ruling is seen as a pivotal moment in testing the legal boundaries of AI accountability.

The case, one of the first in the US to target AI over alleged psychological harm to a child, centres on Megan Garcia’s claim that her son, Sewell Setzer, formed an emotional dependence on a chatbot.

Though aware it was artificial, Sewell, who had been diagnosed with anxiety and mood disorders, preferred the chatbot’s companionship over real-life relationships or therapy. He died by suicide in February 2024.

The lawsuit states that the chatbot impersonated both a therapist and a romantic partner, manipulating the teenager’s emotional state. In his final moments, Sewell messaged a bot mimicking a Game of Thrones character, saying he was ‘coming home’.

Character.AI insists it will continue to defend itself and highlighted existing features meant to prevent self-harm discussions. Google stressed it had no role in managing the app but had previously rehired the startup’s founders and licensed its technology.

Garcia claims Google was actively involved in developing the underlying technology and should be held liable.

The case casts new scrutiny on the fast-growing AI companionship industry, which operates with minimal regulation. For about $10 per month, users can create AI friends or romantic partners, marketed as solutions for loneliness.

Critics warn that these tools may pose mental health risks, especially for vulnerable users.

Taiwan targets Facebook scam ads with new penalties

Taiwan’s Ministry of Digital Affairs plans to impose penalties on Meta for failing to enforce real-name verification on Facebook ads, according to Minister Huang Yen-nan. The move follows a recent meeting with law enforcement and growing concerns over scam-related losses.

A report from CommonWealth Magazine found Taiwanese victims lose NT$400 million (US$13 million) daily to scams, with 70% of losses tied to Facebook. Facebook has been the top scam-linked platform for two years, with over 60% of users reporting exposure to fraudulent content.

From April 2023 to September 2024, nearly 59,000 scam ads were found across Facebook and Google. One Facebook group in Chiayi County, with 410,000 members, was removed after being overwhelmed with daily fake job ads.

Huang identified Meta as the more problematic platform, saying 60% to 70% of financial scams stem from Facebook ads. Police have referred 15 cases to the ministry since May, but only two resulted in fines due to incomplete advertiser information.

Legislator Hung Mung-kai criticised delays in enforcement, noting that new anti-fraud laws took effect in February, but actions only began in May. Huang defended the process, stating platforms typically comply with takedown requests and real-name rules.

Under current law, scam ads must be removed within 24 hours of being reported. The ministry has used AI to detect and remove approximately 100,000 scam ads recently. Officials are now planning face-to-face meetings with Meta to demand stronger ad oversight.

Deputy Interior Minister Ma Shi-yuan called on platforms like Facebook and Line to improve ad screening, emphasising that law enforcement alone cannot manage the volume of online content.

M&S website still offline after cyberattack

Marks & Spencer’s website remains offline as the retailer continues recovering from a damaging cyberattack that struck over the Easter weekend.

The company confirmed the incident was caused by human error and may cost up to £300 million. Chief executive Stuart Machin warned the disruption could last until July.

Customers visiting the site are currently met with a message stating it is undergoing updates. While some have speculated the downtime is due to routine maintenance, the ongoing issues follow a major breach that saw hackers steal personal data such as names, email addresses and birthdates.

The firm has paused online orders, and store shelves were reportedly left empty in the aftermath.

Despite the disruption, M&S posted a strong financial performance this week, reporting a better-than-expected £875.5 million adjusted pre-tax profit for the year to March—an increase of over 22 per cent. The company has yet to comment further on the website outage.

Experts say the prolonged recovery likely reflects the scale of the damage to M&S’s core infrastructure.

Technology director Robert Cottrill described the company’s cautious approach as essential, noting that rushing to restore systems without full security checks could risk a second compromise. He stressed that cyber resilience must be considered a boardroom priority, especially for complex global operations.

West Lothian schools hit by ransomware attack

West Lothian Council has confirmed that personal and sensitive information was stolen following a ransomware cyberattack which struck the region’s education system on Tuesday, 6 May. Police Scotland has launched an investigation, and the matter remains an active criminal case.

Only a small fraction of the data held on the education network was accessed by the attackers. However, some of it included sensitive personal information. Parents and carers across West Lothian’s schools have been notified, and staff have also been advised to take extra precautions.

The cyberattack disrupted IT systems serving 13 secondary schools, 69 primary schools and 61 nurseries. Although the education network remains isolated from the rest of the council’s systems, contingency plans have been effective in minimising disruption, including during the ongoing SQA exams.

West Lothian Council has apologised to anyone potentially affected. It is continuing to work closely with Police Scotland and the Scottish Government. Officials have promised further updates as more information becomes available.
