Scattered Spider cyberattacks set to intensify, warn FBI and CISA

The cybercriminal group known as Scattered Spider is expected to intensify its attacks in the coming weeks, according to a joint warning issued by the FBI, CISA, and cybersecurity agencies in Canada, the UK and Australia.

These warnings highlight the group’s increasingly sophisticated methods, including impersonating employees in calls to IT help desks in order to reset credentials and hijack multi-factor authentication processes.

Moving beyond older techniques, the hackers now deploy stealthy tools such as the RattyRAT remote access trojan and DragonForce ransomware, with a particular focus on VMware ESXi servers.

Their attacks combine social engineering with SIM swapping and phishing, enabling them to exfiltrate sensitive data before locking systems and demanding payment — a tactic known as double extortion.

Scattered Spider, also referred to as Octo Tempest, is reportedly creating fake online identities and infiltrating internal communication channels like Slack and Microsoft Teams. In some cases, its members have even joined incident response calls to gain insight into how companies are reacting.

Security agencies urge organisations to adopt phishing-resistant multi-factor authentication, audit remote access software, monitor unusual logins and behaviours, and ensure offline encrypted backups are maintained.
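The monitoring advice above can be sketched in code. The following is a minimal, hypothetical illustration of the ‘monitor unusual logins’ recommendation, assuming login records of the form (user, country, device, timestamp) pulled from an identity provider’s audit log; real detection systems use far richer signals than this.

```python
from datetime import datetime

def flag_unusual_logins(history, new_login, max_travel_hours=2):
    """Flag a login as suspicious if it comes from a country or device
    the user has never used before, or follows a login from a different
    country implausibly quickly ('impossible travel')."""
    user, country, device, ts = new_login
    past = [h for h in history if h[0] == user]
    reasons = []
    if country not in {h[1] for h in past}:
        reasons.append("new country")
    if device not in {h[2] for h in past}:
        reasons.append("new device")
    for _, c, _, t in past:
        if c != country and abs((ts - t).total_seconds()) < max_travel_hours * 3600:
            reasons.append("impossible travel")
            break
    return reasons

# Example: Alice normally logs in from the UK on her laptop.
history = [("alice", "UK", "laptop-1", datetime(2025, 7, 1, 9, 0))]
alerts = flag_unusual_logins(
    history, ("alice", "US", "phone-9", datetime(2025, 7, 1, 10, 0)))
```

A login from a new country on a new device one hour after a UK login would trigger all three heuristics, which is exactly the kind of anomaly the advisory asks defenders to surface.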

More incidents are expected, as the group continues refining its strategies instead of slowing down.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU to launch digital age verification system by 2026

The European Union will roll out digital age verification across all member states by 2026. Under the Digital Services Act, this mandate requires platforms to verify user age using the new EU Digital Identity Wallet (EUDIW). Non-compliance could lead to fines of up to 6% of a platform’s global annual turnover.

Initially, five countries will pilot the system designed to protect minors and promote online safety. The EUDIW uses privacy-preserving cryptographic proofs, allowing users to prove they are over 18 without uploading personal IDs.
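The principle behind such privacy-preserving proofs can be illustrated with a toy sketch. The example below is emphatically not the EUDIW protocol, which relies on asymmetric signatures and unlinkable selective-disclosure credentials; it only demonstrates the selective-disclosure idea, using a shared HMAC key as a stand-in for an issuer signature.

```python
import hashlib
import hmac

# Hypothetical issuer key; a real wallet would use asymmetric cryptography
# so verifiers never hold the issuer's signing secret.
ISSUER_KEY = b"issuer-demo-key"

def issue_claims(attributes: dict) -> dict:
    """Issuer signs each attribute separately, so the holder can later
    disclose one claim (e.g. over18) without revealing the others."""
    return {name: hmac.new(ISSUER_KEY, f"{name}={value}".encode(),
                           hashlib.sha256).hexdigest()
            for name, value in attributes.items()}

def verify_claim(name: str, value: str, tag: str) -> bool:
    """Relying party checks a single disclosed claim against the issuer key."""
    expected = hmac.new(ISSUER_KEY, f"{name}={value}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# The wallet holds several claims but the website receives only one.
claims = issue_claims({"name": "Alice", "date_of_birth": "2001-05-04",
                       "over18": "true"})
disclosed = ("over18", "true", claims["over18"])
```

The verifier learns only that the over-18 claim is genuine, never the holder’s name or date of birth, which is the data-minimisation property the article describes.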

Unlike the UK’s ID-upload approach, which triggered a rise in VPN usage, the EU model prioritises user anonymity and data minimisation. The system is being developed by Scytales and T-Systems.

Despite its benefits, privacy advocates have flagged concerns. Although the checks themselves are anonymised, telecom providers could potentially analyse network-level signals to infer user behaviour.

Beyond age checks, the EUDIW will store and verify other credentials, including diplomas, licences, and health records. That initiative aims to create a trusted, cross-border digital identity ecosystem across Europe.

As a result, platforms and marketers must adapt. Behavioural tracking and personalised ads may become harder to implement. Smaller businesses might struggle with technical integration and rising compliance costs.

However, centralised control also raises risks. These include potential phishing attacks, service disruptions, and increased government visibility over online activity.

If successful, the EU’s digital identity model could inspire global adoption. It offers a privacy-first alternative to commercial or surveillance-heavy systems and marks a major leap forward in digital trust and safety.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google rolls out AI age detection to protect teen users

In a move aimed at enhancing online protections for minors, Google has started rolling out a machine learning-based age estimation system for signed-in users in the United States.

The new system uses AI to identify users who are likely under the age of 18, with the goal of providing age-appropriate digital experiences and strengthening privacy safeguards.

Initially deployed to a small number of users, the system is part of Google’s broader initiative to align its platforms with the evolving needs of children and teenagers growing up in a digitally saturated world.

‘Children today are growing up with technology, not growing into it like previous generations. So we’re working directly with experts and educators to help you set boundaries and use technology in a way that’s right for your family,’ the company explained in a statement.

The system builds on changes first previewed earlier this year and reflects Google’s ongoing efforts to comply with regulatory expectations and public demand for better youth safety online.

Once a user is flagged by the AI as likely underage, Google will introduce a range of restrictions—most notably in advertising, content recommendation, and data usage.

According to the company, users identified as minors will have personalised advertising disabled and will be shielded from ad categories deemed sensitive. These protections will be enforced across Google’s entire advertising ecosystem, including AdSense, AdMob, and Ad Manager.

The company’s publishing partners were informed via email this week that no action will be required on their part, as the changes will be implemented automatically.

Google’s blog post titled ‘Ensuring a safer online experience for US kids and teens’ explains that its machine learning model estimates age based on behavioural signals, such as search history and video viewing patterns.
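To make the idea of scoring behavioural signals concrete, here is a toy logistic scorer. The feature names, weights, and threshold are invented for this sketch and bear no relation to Google’s actual model, which is proprietary and far more sophisticated.

```python
import math

# Illustrative only: hypothetical behavioural features and hand-picked weights.
WEIGHTS = {
    "kids_video_share": 3.0,       # fraction of watch time on children's content
    "school_hours_activity": 1.5,  # activity concentrated on weekday daytimes
    "account_age_years": -0.4,     # older accounts are less likely to be minors
}
BIAS = -1.0

def likely_under_18(features: dict, threshold: float = 0.5) -> bool:
    """Combine behavioural signals with a logistic function and flag the
    account if the estimated probability of being a minor crosses the
    threshold."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    prob = 1.0 / (1.0 + math.exp(-z))
    return prob >= threshold
```

In a sketch like this, an account watching mostly children’s content on a new account would cross the threshold, while a decade-old account with adult viewing patterns would not; the interesting engineering problems (feature pipelines, calibration, appeals) all live outside this toy.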

If a user is mistakenly flagged or wishes to confirm their age, Google will offer verification tools, including the option to upload a government-issued ID or submit a selfie.

The company stressed that the system is designed to respect user privacy and does not involve collecting new types of data. Instead, it aims to build a privacy-preserving infrastructure that supports responsible content delivery while minimising third-party data sharing.

Beyond advertising, the new protections extend into other parts of the user experience. For those flagged as minors, Google will disable Timeline location tracking in Google Maps and also add digital well-being features on YouTube, such as break reminders and bedtime prompts.

Google will also tweak recommendation algorithms to avoid promoting repetitive content on YouTube, and restrict access to adult-rated applications in the Play Store for flagged minors.

The initiative is not Google’s first foray into child safety technology. The company already offers Family Link for parental controls and YouTube Kids as a tailored platform for younger audiences.

However, the deployment of automated age estimation reflects a more systemic approach, using AI to enforce real-time, scalable safety measures. Google maintains that these updates are part of a long-term investment in user safety, digital literacy, and curating age-appropriate content.

Similar initiatives have already been tested in international markets, and the company says it will closely monitor the US rollout before considering broader implementation.

‘This is just one part of our broader commitment to online safety for young users and families,’ the blog post reads. ‘We’ve continually invested in technology, policies, and literacy resources to better protect kids and teens across our platforms.’

Nonetheless, the programme is likely to attract scrutiny. Critics may question the accuracy of AI-powered age detection and whether the measures strike the right balance between safety, privacy, and personal autonomy — or risk overstepping.

Some parents and privacy advocates may also raise concerns about the level of visibility and control families will have over how children are identified and managed by the system.

As public pressure grows for tech firms to take greater responsibility in protecting vulnerable users, Google’s rollout may signal the beginning of a new industry standard.

The shift towards AI-based age assurance reflects a growing consensus that digital platforms must proactively mitigate risks for young users through smarter, more adaptive technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Apple’s $20B Google deal under threat as AI lags behind rivals

Apple is set to release Q3 earnings on Thursday amid scrutiny of its dependence on its search deal with Google and ongoing struggles with AI progress.

Typically, Apple’s fiscal Q3 garners less investor attention, with anticipation focused instead on the upcoming iPhone launch in Q4. However, this quarter is proving to be anything but ordinary.

Analysts and shareholders alike are increasingly concerned about two looming threats: a potential $20 billion hit to Apple’s Services revenue tied to the US Department of Justice’s (DOJ) antitrust case against Google, and ongoing delays in Apple’s AI efforts.

Ahead of the earnings report, Apple shares were mostly unchanged, reflecting investor caution rather than enthusiasm. Apple’s most pressing challenge stems from its lucrative partnership with Google.

In 2022, Google paid Apple approximately $20 billion to remain the default search engine in the Safari browser and across Siri.

The exclusivity deal has formed a significant portion of Apple’s Services segment, which generated $78.1 billion in revenue that year, meaning Google’s payment alone accounted for more than 25% of the segment’s total.

However, a ruling expected next month from Judge Amit Mehta in the US District Court for the District of Columbia could threaten the entire arrangement. Mehta previously ruled that Google had illegally monopolised the search market.

The forthcoming ‘remedies’ ruling could force Google to end exclusive search deals, divest its Chrome browser, and provide data access to rivals. Should the DOJ’s proposed remedies stand and Google fails to overturn the ruling, Apple could lose a critical source of Services revenue.

According to Morgan Stanley’s Erik Woodring, Apple could see a 12% decline in its full-year 2027 earnings per share (EPS) if it pivots to less lucrative partnerships with alternative search engines.

The user experience may also deteriorate if customers can no longer set Google as their default option. A more radical scenario, Apple launching its own search engine, could dent its 2024 EPS by as much as 20%, though analysts believe this outcome is the least likely.

Alongside regulatory threats, Apple is also facing growing doubts about its ability to compete in AI. Apple has not yet set a clear timeline for releasing an upgraded version of Siri, while rivals accelerate AI hiring and unveil new capabilities.

Bank of America analyst Wamsi Mohan noted this week that persistent delays undermine confidence in Apple’s ability to deliver innovation at pace. ‘Apple’s ability to drive future growth depends on delivering new capabilities and products on time,’ he wrote to investors.

‘If deadlines keep slipping, that potentially delays revenue opportunities and gives competitors a larger window to attract customers.’

While Apple has teased upcoming AI features for future software updates, the lack of a commercial rollout or product roadmap has made investors uneasy, particularly as rivals like Microsoft, Google, and OpenAI continue to set the AI agenda.

Although Apple’s stock remained stable before Thursday’s earnings release, any indication of slowing services growth or missed AI milestones could shake investor confidence.

Analysts will be watching closely for commentary from CEO Tim Cook on how Apple plans to navigate regulatory risks and revive momentum in emerging technologies.

The company’s current crossroads is pivotal for the tech sector more broadly. Regulators are intensifying scrutiny on platform dominance, and AI innovation is fast becoming the new battleground for long-term growth.

As Apple attempts to defend its business model and rekindle its innovation edge, Thursday’s earnings update could serve as a bellwether for its direction in the post-iPhone era.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

VPN dangers highlighted as UK’s Online Safety Act comes into force

Britons are being urged to proceed with caution before turning to virtual private networks (VPNs) in response to the new age verification requirements set by the Online Safety Act.

The law, now in effect, aims to protect young users by restricting access to adult and sensitive content unless users verify their age.

Instead of offering anonymous access, some platforms now demand personal details such as full names, email addresses, and even bank information to confirm a user’s age.

Although the legislation targets adult websites, many people have reported being blocked from accessing less controversial content, including alcohol-related forums and parts of Wikipedia.

As a result, more users are considering VPNs to bypass these checks. However, cybersecurity experts warn that many VPNs can pose serious risks by exposing users to scams, data theft, and malware. Without proper research, users might install software that compromises their privacy rather than protecting it.

With Ofcom reporting that eight per cent of children aged 8 to 14 in the UK have accessed adult content online, the new rules are viewed as a necessary safeguard. Still, concerns remain about the balance between online safety and digital privacy for adult users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian companies unite cybersecurity defences to combat AI threats

Australian companies are increasingly adopting unified, cloud-based cybersecurity systems as AI reshapes both threats and defences.

A new report from global research firm ISG reveals that many enterprises are shifting away from fragmented, uncoordinated tools and instead opting for centralised platforms that can better detect and counter sophisticated AI-driven attacks.

The rapid rise of generative AI has introduced new risks, including deepfakes, voice cloning and misinformation campaigns targeting elections and public health.

In response, organisations are reinforcing identity protections and integrating AI into their security operations to improve both speed and efficiency. These tools also help offset a growing shortage of cybersecurity professionals.

After a rushed move to the cloud during the pandemic, many businesses retained outdated perimeter-focused security systems. Now, firms are switching to cloud-first strategies that target vulnerabilities at endpoints and prevent misconfigurations instead of relying on legacy solutions.
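As a concrete example of the ‘prevent misconfigurations’ step, a cloud-first pipeline might run a simple configuration audit before deployment. The bucket records and flag names below are hypothetical; real cloud security posture tools check far more conditions against provider APIs.

```python
def audit_buckets(buckets):
    """Scan a list of storage-bucket configuration records and return
    (bucket_name, issue) pairs for common misconfigurations: public
    read access and missing encryption at rest."""
    findings = []
    for b in buckets:
        if b.get("public_read"):
            findings.append((b["name"], "publicly readable"))
        if not b.get("encrypted", False):
            findings.append((b["name"], "encryption disabled"))
    return findings

# Example: one risky bucket, one compliant bucket.
report = audit_buckets([
    {"name": "logs", "public_read": True},
    {"name": "backups", "public_read": False, "encrypted": True},
])
```

Running checks like this in a deployment pipeline catches configuration drift before exposure, which is the shift away from perimeter-only defences that the report describes.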

By reducing overlap in systems like identity management and threat detection, businesses are streamlining defences for better resilience.

ISG also notes a shift in how companies choose cybersecurity providers. Firms like IBM, PwC, Deloitte and Accenture are seen as leaders in the Australian market, while companies such as TCS and AC3 have been flagged as rising stars.

The report further highlights growing demands for compliance and data retention, signalling a broader national effort to enhance cyber readiness across industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Alignment Project to tackle safety risks of advanced AI systems

The UK’s Department for Science, Innovation and Technology (DSIT) has announced a new international research initiative aimed at ensuring future AI systems behave in ways aligned with human values and interests.

Called the Alignment Project, the initiative brings together global collaborators including the Canadian AI Safety Institute, Schmidt Sciences, Amazon Web Services (AWS), Anthropic, Halcyon Futures, the Safe AI Fund, UK Research and Innovation, and the Advanced Research and Invention Agency (ARIA).

DSIT confirmed that the project will invest £15 million into AI alignment research – a field concerned with developing systems that remain responsive to human oversight and follow intended goals as they become more advanced.

Officials said this reflects growing concerns that today’s control methods may fall short when applied to the next generation of AI systems, which are expected to be significantly more powerful and autonomous.

The Alignment Project will provide funding through three streams, each tailored to support different aspects of the research landscape. Grants of up to £1 million will be made available for researchers across a range of disciplines, from computer science to cognitive psychology.

A second stream will provide access to cloud computing resources from AWS and Anthropic, enabling large-scale technical experiments in AI alignment and safety.

The third stream focuses on accelerating commercial solutions through venture capital investment, supporting start-ups that aim to build practical tools for keeping AI behaviour aligned with human values.

An expert advisory board will guide the distribution of funds and ensure that investments are strategically focused. DSIT also invited further collaboration, encouraging governments, philanthropists, and industry players to contribute additional research grants, computing power, or funding for promising start-ups.

Science, Innovation and Technology Secretary Peter Kyle said it was vital that alignment research keeps pace with the rapid development of advanced systems.

‘Advanced AI systems are already exceeding human performance in some areas, so it’s crucial we’re driving forward research to ensure this transformative technology is behaving in our interests,’ Kyle said.

‘AI alignment is all geared towards making systems behave as we want them to, so they are always acting in our best interests.’

The announcement follows recent warnings from scientists and policy leaders about the risks posed by misaligned AI systems. Experts argue that without proper safeguards, powerful AI could behave unpredictably or act in ways beyond human control.

Geoffrey Irving, chief scientist at the AI Safety Institute, welcomed the UK’s initiative and highlighted the need for urgent progress.

‘AI alignment is one of the most urgent and under-resourced challenges of our time. Progress is essential, but it’s not happening fast enough relative to the rapid pace of AI development,’ he said.

‘Misaligned, highly capable systems could act in ways beyond our ability to control, with profound global implications.’

He praised the Alignment Project for its focus on international coordination and cross-sector involvement, which he said were essential for meaningful progress.

‘The Alignment Project tackles this head-on by bringing together governments, industry, philanthropists, VC, and researchers to close the critical gaps in alignment research,’ Irving added.

‘International coordination isn’t just valuable – it’s necessary. By providing funding, computing resources, and interdisciplinary collaboration to bring more ideas to bear on the problem, we hope to increase the chance that transformative AI systems serve humanity reliably, safely, and in ways we can trust.’

The project positions the UK as a key player in global efforts to ensure that AI systems remain accountable, transparent, and aligned with human intent as their capabilities expand.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

White House launches AI Action Plan with Executive Orders on exports and regulation

The White House has unveiled a sweeping AI strategy through its new publication Winning the Race: America’s AI Action Plan.

Released alongside three Executive Orders, the plan outlines the federal government’s next phase in shaping AI policy, focusing on innovation, infrastructure, and global leadership.

The AI Action Plan centres on three key pillars: accelerating AI development, establishing national AI infrastructure, and promoting American AI standards globally. Four consistent themes run through each pillar: regulation and deregulation, investment, research and standardisation, and cybersecurity.

Notably, deregulation is central to the plan’s strategy, particularly in reducing barriers to AI growth and speeding up infrastructure approval for data centres and grid expansion.

Investment plays a dominant role. Federal funds will support AI job training, data access, lab automation, and domestic component manufacturing, reducing reliance on foreign suppliers.

Alongside, the plan calls for new national standards, improved dataset quality, and stronger evaluation mechanisms for AI interpretability, control, and safety. A dedicated AI Workforce Research Hub is also proposed.

In parallel, three Executive Orders were issued. One bans ‘woke’ or ideologically biased AI tools in federal use, another fast-tracks data centre development using federal land and brownfield sites, and a third launches an AI exports programme to support full-stack US AI systems globally.

While these moves open new opportunities, they also raise questions around regulation, bias, and the future shape of AI development in the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT gets smarter with Study Mode to support active learning

OpenAI has launched a new Study Mode in ChatGPT to help users engage more deeply with learning. Rather than simply providing answers, the feature guides users through concepts and problem-solving step-by-step. It is designed to support critical thinking and improve long-term understanding.

The company developed the feature with educators, scientists, and pedagogy experts. They aimed to ensure the AI supports active learning and doesn’t just deliver quick fixes. The result is a mode that encourages curiosity, reflection, and metacognitive development.

According to OpenAI, Study Mode allows users to approach subjects more critically and thoroughly. It breaks down complex ideas, asks questions, and helps manage cognitive load during study. Instead of spoon-feeding, the AI acts more like a tutor than a search engine.

The shift reflects a broader trend in educational technology — away from passive learning tools. Many students turn to AI for homework help, but educators have warned of over-reliance. Study Mode attempts to strike a balance by promoting engagement over shortcuts.

For instance, rather than giving the complete solution to a maths problem, Study Mode might ask: ‘What formula might apply here?’ or ‘How could you simplify this expression first?’ This approach nudges students to participate in the process and build fundamental problem-solving skills.
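The Socratic pattern described above can be sketched as a tiny function that returns a guiding question rather than an answer. The subject names and prompt templates are hypothetical illustrations, not OpenAI’s actual implementation.

```python
# Guiding questions per subject; a real tutor would generate these
# dynamically from the student's question and prior turns.
SOCRATIC_PROMPTS = {
    "maths": "What formula might apply here, and why?",
    "science": "What hypothesis could you test first?",
    "humanities": "What is the author's central claim in this passage?",
}

def study_mode_reply(subject: str, question: str) -> str:
    """Return a guiding question for known subjects, falling back to a
    generic reflective prompt, instead of solving the problem outright."""
    hint = SOCRATIC_PROMPTS.get(subject, "What do you already know that could help?")
    return f"Let's work through it together. {hint}"

reply = study_mode_reply("maths", "Solve 2x + 3 = 7")
```

The design point is that the answer never appears in the reply: the student is nudged to supply the next step themselves, mirroring the tutoring behaviour the article describes.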

It also adapts to different learning needs. In science, it might walk through hypotheses and reasoning; in the humanities, it may help analyse a passage or structure an essay. Prompting users to think aloud mirrors effective tutoring strategies.

OpenAI says feedback from teachers helped shape the feature’s tone and pacing. One key aim was to avoid overwhelming learners with too much information at once. Instead, Study Mode introduces concepts incrementally, supporting better retention and understanding.

The company also consulted cognitive scientists to align the feature with best practices in memory and comprehension. This includes encouraging users to reflect on what they are learning and why specific steps matter. Such strategies are known to improve both academic performance and self-directed learning.

While the feature is part of ChatGPT, it can be toggled on or off. Users can activate Study Mode when tackling a tricky topic or exploring new material. They can then switch to normal responses for broader queries or summarised answers.

Educators have expressed cautious optimism about the update. Some see it as a tool supporting homework, revision, or assessment preparation. However, they also warn that no AI can replace direct teaching or personalised guidance.

Tools like this could be valuable in under-resourced settings or for independent learners.

Study Mode’s interactive style may help level the playing field for students without regular academic support. It also gives parents and tutors a new way to guide learners without doing the work for them.

OpenAI’s earlier education efforts included teacher guides and classroom use cases. Study Mode, however, marks a more direct push to reshape how students use AI in learning.

It positions ChatGPT not as a cheat sheet, but as a co-pilot for intellectual growth.

Looking ahead, OpenAI says it plans to iterate based on user feedback and teacher insights. Future updates may include subject-specific prompts, progress tracking, or integrations with educational platforms. The goal is to build a tool that adapts to learning styles without compromising depth or rigour.

As AI continues to reshape education, tools like Study Mode may help answer a central question: Can technology support genuine understanding, instead of just faster answers? With Study Mode, OpenAI believes the answer is yes, if used wisely.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google states it has not received UK request to weaken encryption

Google has confirmed it has not received a request from the UK government to create a backdoor in its encrypted services. The clarification comes amid ongoing scrutiny of surveillance legislation and its implications for tech companies offering end-to-end encrypted services.

Reports indicate that the UK government may be reconsidering an earlier request for Apple to enable access to user data through a technical backdoor, a move that prompted strong opposition from the US government. In response to these developments, US Senator Ron Wyden has sought to clarify whether similar requests were made to other major technology companies.

While Google initially declined to respond to inquiries from Senator Wyden’s office, the company has since confirmed that it has not received a technical capabilities notice—an official order under UK law that could require companies to enable access to encrypted data.

Senator Wyden, who serves on the Senate Intelligence Committee, addressed the matter in a letter to Director of National Intelligence Tulsi Gabbard. The letter urged the US intelligence community to assess the potential national security implications of the UK’s surveillance laws and any undisclosed requests to US companies.

Meta, which offers encrypted messaging through WhatsApp and Facebook Messenger, also stated in a 17 March communication to Wyden’s office that it had ‘not received an order to backdoor our encrypted services, like that reported about Apple.’

While companies operating in the UK may be restricted from disclosing certain surveillance orders under law, confirmations such as Google’s provide rare public insight into the current landscape of international encryption policy and cooperation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!