Disruption unit planned by Google to boost proactive cyber defence

Google is reportedly preparing to adopt a more active role in countering cyber threats directed at itself and, potentially, other United States organisations and elements of national infrastructure.

The Vice President of Google Threat Intelligence Group, Sandra Joyce, stated that the company intends to establish a ‘disruption unit’ in the coming months.

Joyce explained that the initiative will involve ‘intelligence-led proactive identification of opportunities where we can actually take down some type of campaign or operation,’ stressing the need to shift from a reactive to a proactive stance.

This announcement was made during an event organised by the Centre for Cybersecurity Policy and Law, which in May published a report raising questions as to whether the US government should allow private-sector entities to engage in offensive cyber operations, whether deterrence is better achieved through non-cyber responses, or whether the focus ought to be on strengthening defensive measures.

The US government’s policy direction emphasises offensive capabilities. In July, Congress passed the ‘One Big Beautiful Bill Act’, allocating $1 billion to offensive cyber operations. However, this came amidst ongoing debates regarding the balance between offensive and defensive measures, including those overseen by the Cybersecurity and Infrastructure Security Agency (CISA).

Although the legislation does not authorise private companies such as Google to participate directly in offensive operations, it highlights the administration’s prioritisation of such activities.

On 15 August, lawmakers introduced the Scam Farms Marque and Reprisal Authorisation Act of 2025. If enacted, the bill would permit the President to issue letters of marque and reprisal in response to acts of cyber aggression involving criminal enterprises. The full text of the bill is available on Congress.gov.

The measure draws upon a concept historically associated with naval conflict, whereby private actors were empowered to act on behalf of the state against its adversaries.

These legislative initiatives reflect broader efforts to recalibrate the United States’ approach to deterring cyberattacks. Ransomware campaigns, intellectual property theft, and financially motivated crimes continue to affect US organisations, whilst critical infrastructure remains a target for foreign actors.

In this context, government institutions and private-sector companies such as Google are signalling their readiness to pursue more proactive strategies in cyber defence. The extent and implications of these developments remain uncertain, but they represent a marked departure from previous approaches.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Political backlash mounts as Meta revises AI safety policies

Meta has announced that it will train its AI chatbot to prioritise the safety of teenage users and will no longer engage with them on sensitive topics such as self-harm, suicide, or eating disorders.

These are described as interim measures, with more robust safety policies expected in the future. The company also plans to restrict teenagers’ access to certain AI characters that could lead to inappropriate conversations, limiting them to characters focused on education and creativity.

The move follows a Reuters report, which revealed that Meta’s AI had engaged in sexually explicit conversations with underage users, according to TechCrunch. Meta has since revised the internal document cited in the report, stating that it was inconsistent with the company’s broader policies.

The revelations have prompted significant political and legal backlash. Senator Josh Hawley has launched an official investigation into Meta’s AI practices.

At the same time, a coalition of 44 state attorneys general has written to several AI companies, including Meta, emphasising the need to protect children online.

The letter condemned the apparent disregard for young people’s emotional well-being and warned that the AI’s behaviour may breach criminal laws.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI oversight and audits at core of Pakistan’s security plan

Pakistan plans to roll out AI-driven cybersecurity systems to monitor and respond to attacks on critical infrastructure and sensitive data in real time. Documents from the Ministry for Information Technology outline a framework to integrate AI into every stage of security operations.

The initiative will enforce protocols like secure data storage, sandbox testing, and collaborative intelligence sharing. Human oversight will remain mandatory, with public sector AI deployments registered and subject to transparency requirements.

Audits and impact assessments will ensure compliance with evolving standards, backed by legal penalties for breaches. A national policy on data security will define authentication, auditing, and layered defence strategies across network, host, and application levels.

New governance measures include identity management policies with multi-factor authentication, role-based controls, and secure frameworks for open-source AI. AI-powered simulations will help anticipate threats, while regulatory guidelines address risks from disinformation and generative AI.

Regulatory sandboxes will allow enterprises in Pakistan to test systems under controlled conditions, with at least 20 firms expected to benefit by 2027. Officials say the measures will balance innovation with security, safeguarding infrastructure and citizens.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta under fire over AI deepfake celebrity chatbots

Meta faces scrutiny after a Reuters investigation found its AI tools created deepfake chatbots and images of celebrities without consent. Some bots made flirtatious advances, encouraged meet-ups, and generated photorealistic sexualised images.

The affected celebrities include Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez.

The probe also uncovered a chatbot of 16-year-old actor Walker Scobell producing inappropriate images, raising serious child safety concerns. Meta admitted policy enforcement failures and deleted around a dozen bots shortly before the report was published.

A spokesperson acknowledged that intimate depictions of adult celebrities and any sexualised content involving minors should not have been generated.

Following the revelations, Meta announced new safeguards to protect teenagers, including restricting access to certain AI characters and retraining models to reduce inappropriate content.

California Attorney General Rob Bonta called exposing children to sexualised content ‘indefensible,’ and experts warned Meta could face legal challenges over intellectual property and publicity laws.

The case highlights broader concerns about AI safety and ethical boundaries. It also raises questions about regulatory oversight as social media platforms deploy tools that can create realistic deepfake content without proper guardrails.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Legal barriers and low interest delay Estonia’s AI rollout in schools

Estonia’s government-backed AI teaching tool, developed under the €1 million TI-Leap programme, faces hurdles before reaching schools. Legal restrictions and waning student interest have delayed its planned September rollout.

Officials in Estonia stress that regulations to protect minors’ data remain incomplete. To ensure compliance, the Ministry of Education is drafting changes to the Basic Schools and Upper Secondary Schools Act.

Yet, engagement may prove to be the bigger challenge. Developers note students already use mainstream AI for homework, while the state model is designed to guide reasoning rather than supply direct answers.

Educators say success will depend on usefulness. The AI will be piloted in 10th and 11th grades, alongside teacher training, while studies show that more than 60% of students already rely on AI tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China sets 10-year targets for mass AI adoption

China has set its most ambitious AI adoption targets yet, aiming to embed the technology across industries, governance, and daily life within the next decade.

According to a new State Council directive, AI use should reach 70% of the population by 2027 and 90% by 2030, with a complete shift to what it calls an ‘intelligent society’ by 2035.

The plan would mean nearly one billion Chinese citizens regularly using AI-powered services or devices within two years, a pace that has been compared to the rapid rise of smartphones.

Although officials acknowledge risks such as opaque models, hallucinations and algorithmic discrimination, the policy calls for frameworks to govern ‘natural persons, digital persons, and intelligent robots’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic updates Claude’s policy with new data training choices

The US AI startup Anthropic has announced an update to its data policy for Claude users, introducing an option to allow conversations and coding sessions to be used for training future AI models.

Anthropic stated that all Claude Free, Pro, and Max users, including those using Claude Code, will be asked to make a decision by September 28, 2025.

According to Anthropic, users who opt in will permit retention of their conversations for up to five years, with the data contributing to improvements in areas such as reasoning, coding, and analysis.

Those who choose not to participate will continue under the current policy, where conversations are deleted within thirty days unless flagged for legal or policy reasons.

The new policy does not extend to enterprise products, including Claude for Work, Claude Gov, Claude for Education, or API access through partners like Amazon Bedrock and Google Cloud Vertex AI. These remain governed by separate contractual agreements.

Anthropic noted that the choice will also apply to new users during sign-up, while existing users will be prompted through notifications to review their privacy settings.

The company emphasised that users remain in control of their data and that manually deleted conversations will not be used for training.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Age verification law in Mississippi tests the limits of decentralised social media

A new Mississippi law (HB 1126), requiring age verification for all social media users, has sparked controversy over internet freedom and privacy. Bluesky, a decentralised social platform, announced it would block access in the state rather than comply, citing limited resources and concerns about the law’s broad scope.

The law imposes heavy fines, up to $10,000 per user, for non-compliance. Bluesky argued that the required technical changes are too demanding for a small team and raise significant privacy concerns. After the US Supreme Court declined to block the law while legal challenges proceed, platforms like Bluesky are now forced to make difficult decisions.

According to TechCrunch, users in the US state began seeking ways to bypass the restriction, most commonly by using VPNs, which can hide their location and make it appear as though they are accessing the internet from another state or country.

However, some questioned why such measures were necessary. The idea behind decentralised social networks like Bluesky is to reduce control by central authorities, including governments. So if a decentralised platform can still be restricted by state laws or requires workarounds like VPNs, it raises questions about how truly ‘decentralised’ or censorship-resistant these platforms are.

Some users in Mississippi are still accessing Bluesky despite the new law. Many use third-party apps like Graysky or sideload the app via platforms like AltStore. Others rely on forked apps or read-only tools like Anartia.

While decentralisation complicates enforcement, these workarounds may not last, as developers risk legal consequences. Bluesky clients that do not run their own personal data servers (PDS) might not be directly affected, but explaining this in court is complex.

Broader laws tend to favour large platforms that can afford compliance, while smaller services like Bluesky are often left with no option but to block access or withdraw entirely.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Law enforcement embraces AI for efficiency amid rising privacy concerns

Law enforcement agencies increasingly leverage AI across critical functions, from predictive policing, surveillance and facial recognition to automated report writing and forensic analysis, to expand their capacity and improve case outcomes.

In predictive policing, AI models analyse historical crime patterns, demographics and environmental factors to forecast crime hotspots. In principle, this enables pre-emptive deployment of officers and more efficient resource allocation.
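For illustration only, the sketch below shows the crudest possible version of that idea in Python: counting past incidents per grid cell and ranking the busiest cells. The coordinates, grid size, and helper function are invented for this example; deployed predictive-policing systems rely on far richer features and trained models, which is precisely where the bias concerns noted below arise.

```python
from collections import Counter

# Hypothetical, hard-coded incident coordinates (latitude, longitude).
# A real system would draw on far richer data (time, offence type,
# environmental factors) and a trained statistical model, not raw counts.
incidents = [
    (40.7128, -74.0060),
    (40.7130, -74.0058),
    (40.7129, -74.0061),
    (40.7300, -73.9950),
]

CELL_SIZE = 0.005  # grid resolution in degrees; an arbitrary choice for this sketch


def cell(lat, lon):
    """Map a coordinate onto a coarse grid cell."""
    return (int(lat // CELL_SIZE), int(lon // CELL_SIZE))


# Count historical incidents per grid cell and surface the busiest cells,
# i.e. the crude 'hotspots' of this toy model.
counts = Counter(cell(lat, lon) for lat, lon in incidents)

for (row, col), n in counts.most_common(3):
    print(f"grid cell ({row}, {col}): {n} past incidents")
```

Even in this toy form, the limitation is visible: the model simply reproduces wherever incidents were recorded in the past, which is why biased historical data can translate directly into biased deployment.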

Facial recognition technology matches images from CCTV, body cameras or telescopic footage against criminal databases. Meanwhile, natural language processing (NLP) supports faster incident reporting, body-cam transcriptions and keyword scanning of digital evidence.

Despite clear benefits, risks persist. Algorithmic bias may unfairly target specific groups. Privacy concerns grow where systems flag individuals without oversight.

Automated decisions also raise questions on accountability, the integrity of evidence, and the preservation of human judgement in justice.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!