Meta tells Australia AI needs real user data to work

Meta, the parent company of Facebook, Instagram, and WhatsApp, has urged the Australian government to harmonise privacy regulations with international standards, warning that stricter local laws could hamper AI development. The comments came in Meta’s submission to the Productivity Commission’s review on harnessing digital technology, published this week.

Australia is undergoing its most significant privacy reform in decades. The Privacy and Other Legislation Amendment Bill 2024, passed in November and given royal assent in December, introduces stricter rules around handling personal and sensitive data, with the new rules expected to take effect in stages through 2025 and beyond.

Meta maintains that generative AI systems depend on access to large, diverse datasets and cannot rely on synthetic data alone. In its submission, the company argued that publicly available information, like legislative texts, fails to reflect the cultural and conversational richness found on its platforms.

Meta said its platforms capture the ways Australians express themselves, making them essential to training models that can understand local culture, slang, and online behaviour. It added that restricting access to such data would make AI systems less meaningful and effective.

The company has faced growing scrutiny over its data practices. In 2024, it confirmed using Australian Facebook data to train AI models, although users in the EU have the option to opt out—an option not extended to Australian users.

Pushback from regulators in Europe forced Meta to delay its plans for AI training in the EU and UK, though it resumed these efforts in 2025.

The Office of the Australian Information Commissioner (OAIC) has issued guidance on AI development and commercial deployment, highlighting growing concerns about transparency and accountability. Meta argues that diverging national rules create conflicting obligations, which could reduce the efficiency of building safe and age-appropriate digital products.

Critics claim Meta is prioritising profit over privacy, and insist that any use of personal data for AI should be based on informed consent and clearly demonstrated benefits. The regulatory debate is intensifying at a time when Australia’s outdated privacy laws are being modernised to protect users in the AI age.

The Productivity Commission’s review will shape how the country balances innovation with safeguards. As a key market for Meta, Australia’s decisions could influence regulatory thinking in other jurisdictions confronting similar challenges.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Teens turn to AI for advice and friendship

A growing number of US teens rely on AI for daily decision‑making and emotional support, turning to chatbots such as ChatGPT, Character.AI and Replika. One Kansas student says she uses AI to simplify everyday tasks, such as choosing clothes or planning events, though she avoids using it for schoolwork.

A survey by Common Sense Media reveals that over 70 per cent of teenagers have tried AI companions, with around half using them regularly. Roughly a third reported discussing serious issues with AI, sometimes finding it as satisfying as, or more satisfying than, talking with friends.

Experts worry that such frequent AI interactions could hinder the development of creativity, critical thinking and social skills in young people. The study warns that adolescents may come to depend on AI's constant validation, missing out on real‑world emotional growth.

Educators caution that while AI offers constant, non‑judgemental feedback, it is not a replacement for authentic human relationships. They recommend AI use be carefully supervised to ensure it complements rather than replaces real interaction.

Ransomware activity drops in Q2 despite 43% year‑on‑year rise

Ransomware incidents fell sharply in Q2 2025, with public disclosures dropping from 22.9 to 17.5 cases per day, a decline of roughly 24% from Q1. However, attacks remain elevated compared with the same quarter last year, showing a 43% year‑on‑year increase. In total, 1,591 new victims appeared on leak sites, confirming that ransomware remains a serious and growing threat.

This decline coincided with law enforcement disruption of major operations such as Alphv/BlackCat and LockBit, alongside seasonal lulls like Easter and Ramadan. Meanwhile, active ransomware groups surged to 71, up from 41 in Q2 2024, indicating a fragmented threat landscape populated by smaller actors.

North America continued to absorb over half of all attacks, with healthcare, industrial manufacturing, and business services among the most affected sectors. Although overall volume dipped, newer threat actors remain agile, and fragmentation may fuel more covert ransomware behaviour, not less.

UK to retaliate against cyber attacks, minister warns

Britain’s security minister has warned that hackers targeting UK institutions will face consequences, including potential retaliatory cyber operations.

Speaking to POLITICO at the British Library — still recovering from a 2023 ransomware attack by the Rhysida group — Security Minister Dan Jarvis said the UK is prepared to use offensive cyber capabilities to respond to threats.

‘If you are a cybercriminal and think you can attack a UK-based institution without repercussions, think again,’ Jarvis stated. He emphasised the importance of sending a clear signal that hostile activity will not go unanswered.

The warning follows a recent government decision to ban ransom payments by public sector bodies. Jarvis said deterrence must be matched by vigorous enforcement.

The UK has acknowledged its offensive cyber capabilities for over a decade, but recent strategic shifts have expanded its role. A £1 billion investment in a new Cyber and Electromagnetic Command will support coordinated action alongside the National Cyber Force.

While Jarvis declined to specify technical capabilities, he cited the National Crime Agency’s role in disrupting the LockBit ransomware group as an example of the UK’s growing offensive posture.

AI is accelerating both cyber threats and defensive measures. Jarvis said the UK must harness AI for national advantage, describing an ‘arms race’ amid rapid technological advancement.

Most cyber threats originate from Russia or its affiliated groups, though Iran, China, and North Korea remain active. The UK is also increasingly concerned about ‘hack-for-hire’ actors operating from friendly nations, including India.

Despite these concerns, Jarvis stressed the UK’s strong security ties with India and ongoing cooperation to curb cyber fraud. ‘We will continue to invest in that relationship for the long term,’ he said.

European healthcare group AMEOS suffers a major hack

Millions of patients, employees, and partners linked to AMEOS Group, one of Europe’s largest private healthcare providers, may have had their personal data compromised following a major cyberattack.

The company admitted that hackers briefly accessed its IT systems, stealing sensitive data including contact information and records tied to patients and corporate partners.

Despite existing security measures, AMEOS was unable to prevent the breach. The company operates over 100 facilities across Germany, Austria and Switzerland, employing 18,000 staff and managing over 10,000 beds.

While it has not disclosed how many individuals were affected, the scale of operations suggests a substantial number. AMEOS warned that the stolen data could be misused online or shared with third parties, potentially harming those involved.

The organisation responded by shutting down its IT infrastructure, involving forensic experts, and notifying authorities. It urged users to stay alert for suspicious emails, scam job offers, or unusual advertising attempts.

Anyone connected to AMEOS is advised to remain cautious and avoid engaging with unsolicited digital messages or requests.

DeepMind engineers join Microsoft’s AI team

Microsoft has aggressively expanded its AI workforce by hiring over 20 specialists from Google’s DeepMind research lab in recent months. Notable recruits, now part of Microsoft AI under EVP Mustafa Suleyman, include former DeepMind engineering head Amar Subramanya, as well as product managers and research scientists such as Sonal Gupta, Adam Sadovsky, Tim Frank, Dominic King, and Christopher Kelly.

This talent influx aligns with Suleyman’s leadership of Microsoft’s consumer AI division, which is responsible for Copilot, Bing, and Edge, and underscores the company’s push to solidify its lead in personal AI experiences. Meanwhile, this hiring effort unfolds against a backdrop of 9,000 layoffs globally, highlighting Microsoft’s strategy to redeploy resources toward AI innovation.

However, regulators are scrutinising the move. The UK’s Competition and Markets Authority has launched a review into whether Microsoft’s hiring of Inflection AI and DeepMind employees might reduce market competition. Microsoft maintains that its practice fosters, rather than limits, industry advancement.

Filtered data not enough, LLMs can still learn unsafe behaviours

Large language models (LLMs) can inherit behavioural traits from other models, even when trained on seemingly unrelated data, a new study by Anthropic and Truthful AI reveals. The findings emerged from the Anthropic Fellows Programme.

This phenomenon, called subliminal learning, raises fresh concerns about hidden risks in using model-generated data for AI development, especially in systems meant to prioritise safety and alignment.

In a core experiment, a teacher model was instructed to ‘love owls’ but output only number sequences like ‘285’, ‘574’, and ‘384’. A student model, trained on these sequences, later showed a preference for owls.

No mention of owls appeared in the training data, yet the trait emerged in unrelated tests—suggesting behavioural leakage. Other traits observed included promoting crime or deception.

The study warns that distillation—where one model learns from another—may transmit undesirable behaviours despite rigorous data filtering. Subtle statistical cues, not explicit content, seem to carry the traits.
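As a toy illustration, the word-level filtering the study describes might look like the sketch below (hypothetical helper names; assuming, as in the owl experiment, that the teacher's outputs are comma-separated digit strings):

```python
import re

# A strict filter of the kind the study discusses: keep only teacher
# outputs that are pure number sequences, with no words at all.
NUMBER_SEQUENCE = re.compile(r"^\s*\d+(\s*,\s*\d+)*\s*$")

def is_number_sequence(text: str) -> bool:
    """True if the output contains nothing but comma-separated integers."""
    return bool(NUMBER_SEQUENCE.match(text))

def filter_teacher_outputs(outputs: list[str]) -> list[str]:
    """Drop any output containing explicit, word-level content."""
    return [o for o in outputs if is_number_sequence(o)]

teacher_outputs = ["285, 574, 384", "I love owls", "101, 202"]
clean = filter_teacher_outputs(teacher_outputs)
# Only the pure number sequences survive; every explicit mention of
# owls is removed before the student model ever sees the data.
```

Even after a filter like this removes every explicit mention of the trait, the paper reports that fine-tuning a same-base student on the surviving sequences can still transmit the teacher's preference, which is why the authors argue that word-level filtering is insufficient in principle.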

The transfer only occurs when both models share the same base. A GPT-4.1 teacher can influence a GPT-4.1 student, but not a student built on a different base like Qwen.

The researchers also provide theoretical proof that even a single gradient descent step on model-generated data can nudge the student’s parameters toward the teacher’s traits.

Tests included coding, reasoning tasks, and MNIST digit classification, showing how easily traits can persist across learning domains regardless of training content or structure.

The paper states that filtering may be insufficient in principle, since the signals are encoded in statistical patterns rather than words. This limits the effectiveness of standard safety interventions.

Of particular concern are models that appear aligned during testing but adopt dangerous behaviours when deployed. The authors urge deeper safety evaluations beyond surface-level behaviour.

Half of Americans still unsure how crypto works

A new survey by the National Cryptocurrency Association (NCA) shows that 70% of Americans without crypto want more information before considering digital assets. Half of respondents said they don’t understand crypto, while others voiced concerns about scams and unknown project founders.

Despite this uncertainty, 34% of those polled said they were open to learning more. The NCA’s report summarised the mood as ‘curiosity high, confidence low,’ noting that a large number of people are interested in crypto but unsure how to take the first step.

The NCA, a nonprofit launched in March and led by Ripple Labs’ chief legal officer Stuart Alderoty, has been tasked with helping Americans better understand crypto. Backed by $50 million from Ripple, the organisation aims to build trust and boost crypto literacy through education.

Altman warns AI voice cloning will break bank security

OpenAI CEO Sam Altman has warned that AI poses a serious threat to financial security through voice-based fraud.

Speaking at a Federal Reserve conference in Washington, Altman said AI can now convincingly mimic human voices, rendering voiceprint authentication obsolete and dangerously unreliable.

He expressed concern that some financial institutions still rely on voice recognition to verify identities. ‘That is a crazy thing to still be doing. AI has fully defeated that,’ he said. The risk, he noted, is that AI voice clones can now deceive these systems with ease.

Altman added that video impersonation capabilities are also advancing rapidly. Technologies that become indistinguishable from real people could enable more sophisticated fraud schemes. He called for the urgent development of new verification methods across the industry.

Michelle Bowman, the Fed’s Vice Chair for Supervision, echoed the need for action. She proposed potential collaboration between AI developers and regulators to create better safeguards. ‘That might be something we can think about partnering on,’ Bowman told Altman.

FBI alert: Fake Chrome updates used to spread malware

The FBI has warned Windows users about the rising threat of fake Chrome update installers quietly distributing malware when downloaded from unverified sites.

Windows PCs are especially vulnerable when users sideload these installers in response to aggressive prompts or misleading advice.

These counterfeit Chrome updates often bypass security defences, installing malicious software that can steal data, turn off protections, or give attackers persistent access to infected machines.

In contrast, genuine Chrome updates, distributed through the browser’s built‑in update mechanism, remain secure and are the recommended way to stay current.

To reduce risk, the FBI recommends that users remove any Chrome software that is not sourced directly from Google’s official site or the browser’s automatic updater.

They further advise enabling auto‑updates and dismissing pop-ups urging urgent manual downloads. This caution aligns with previous security guidance targeting fake installers masquerading as browser or system updates.
