Grok to be integrated into Pentagon networks as the US expands military AI strategy

The US Department of Defense plans to integrate Elon Musk’s AI tool Grok into Pentagon networks later in January, according to Defense Secretary Pete Hegseth.

The system is expected to operate across both classified and unclassified military environments as part of a broader push to expand AI capabilities.

Hegseth also outlined an AI acceleration strategy designed to increase experimentation, reduce administrative barriers and prioritise investment across defence technology.

The approach aims to enhance access to data across federated IT systems, aligning with official views that military AI performance relies on data availability and interoperability.

The move follows earlier decisions by the Pentagon to adopt Google’s Gemini for an internal AI platform and to award large contracts to Anthropic, OpenAI, Google and xAI for agentic AI development.

Officials describe these efforts as part of a long-term strategy to strengthen US military competitiveness in AI.

Grok’s integration comes amid ongoing controversy, including criticism over generated imagery and previous incidents involving extremist and offensive content. Several governments and regulators have already taken action against the tool, adding scrutiny to its expanded role within defence systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK considers social media limits for youth

Keir Starmer has told Labour MPs that he is open to an Australian-style ban on social media for young people, following concerns about the amount of time children spend on screens.

The prime minister said reports of very young children using phones for hours each day have increased anxiety about the effects of digital platforms on under-16s.

Starmer previously opposed such a ban, arguing that enforcement would prove difficult and might instead push teenagers towards unregulated online spaces rather than safer platforms. Growing political momentum across Westminster, combined with Australia’s decision to act, has led to a reassessment of that position.

Speaking to MPs, Starmer said different enforcement approaches were being examined and added that phone use during school hours should be restricted.

UK ministers have also revisited earlier proposals aimed at reducing the addictive design of social media and strengthening safeguards on devices sold to teenagers.

Support for stricter measures has emerged across party lines, with senior figures from Labour, the Conservatives, the Liberal Democrats and Reform UK signalling openness to a ban.

A final decision is expected within months as ministers weigh child safety, regulation and practical implementation.


Ireland moves to fast-track AI abuse fines

The Irish government plans to fast-track laws allowing heavy fines for AI abuse. The move follows controversy involving misuse of image generation tools.

Ministers will transpose the existing EU AI Act into Irish law. The framework defines eight prohibited uses that breach rights and public decency.

Penalties could reach €35 million or seven percent of global annual turnover. AI systems would be graded by risk under the enforcement regime.

A dedicated AI office is expected to launch by August to oversee compliance. Irish and UK leaders have pressed platforms to curb harmful AI features.


DeepSeek to launch Italian version of chatbot

Chinese AI start-up DeepSeek will launch a customised Italian version of its online chatbot following a probe by the Italian competition authority, the AGCM. The move follows months of negotiations and a temporary 2025 ban due to concerns over user data and transparency.

The AGCM had criticised DeepSeek for not sufficiently warning users about hallucinations or false outputs generated by its AI models.

The probe ended after DeepSeek agreed to clearer Italian disclosures and technical fixes to reduce hallucinations. The regulator noted that while improvements are commendable, hallucinations remain a global AI challenge.

DeepSeek now provides longer Italian-language warnings and detects Italian IP addresses or Italian-language prompts in order to display localised notices. The company also plans workshops to ensure staff understand Italian consumer law and has submitted multiple proposals to the AGCM since September 2025.

The start-up must provide a progress report within 120 days. Failure to meet the regulator’s requirements could lead to the probe being reopened and fines of up to €10 million (£8.7m).


Next-generation Siri will use Google’s Gemini AI model

Apple and Google have confirmed a multi-year partnership that will see Google’s Gemini models powering Siri and future Apple Intelligence features. The collaboration will underpin Apple’s next-generation AI models, with updates coming later this year.

The move follows delays in rolling out Siri upgrades first unveiled at WWDC 2024. While most Apple Intelligence features have already been launched, the redesigned Siri has been postponed due to development taking longer than anticipated.

According to reports, Apple will continue using its own models for specific tasks, while Gemini is expected to handle summarisation, planning, and other advanced functions.

Bloomberg reports the upcoming Siri will be structured around three layers: query planning, knowledge retrieval, and summarisation. Gemini will handle planning and summarisation, helping Siri structure responses and create clear summaries.

Knowledge retrieval may also benefit from Gemini, potentially broadening Siri’s general knowledge capabilities beyond its current hand-off system.

All AI processing will operate on Apple’s Private Cloud Compute platform, ensuring user privacy and keeping data secure. Analysts suggest this integration will embed Gemini more deeply into Siri’s core functionality, rather than serving as a supplementary tool.


New Spanish bill targets AI misuse of images and voices

Spain’s government has approved draft legislation that would tighten consent rules for AI-generated content, aiming to curb deepfakes and strengthen protections for the use of people’s images and voices. The proposal responds to growing concerns in Europe about AI being used to create harmful material, especially sexual content produced without the subject’s permission.

Under the draft, the minimum age to consent to the use of one’s own image would be set at 16, and stricter limits would apply to reusing images found online or reproducing a person’s voice or likeness through AI without authorisation. Spain’s Justice Minister Félix Bolaños warned that sharing personal photos on social media should not be treated as blanket approval for others to reuse them in different contexts.

The reform explicitly targets commercial misuse by classifying the use of AI-generated images or voices for advertising or other business purposes without consent as illegitimate. At the same time, it would still allow creative, satirical, or fictional uses involving public figures, so long as the material is clearly labelled as AI-generated.

Spain’s move aligns with broader EU efforts, as the bloc is working toward rules that would require member states to criminalise non-consensual sexual deepfakes by 2027. The push comes amid rising scrutiny of AI tools and real-world cases that have intensified calls for more precise legal boundaries, including a recent request by the Spanish government for prosecutors to review whether specific AI-generated material could fall under child pornography laws.

The bill is not yet final. It must go through a public consultation process before returning to the government for final approval and then heading to parliament.


Malta plans tougher laws against deepfake abuse

Malta’s government is preparing new legal measures to curb the abusive use of deepfake technology, with existing laws now under review. The planned reforms aim to introduce penalties for the misuse of AI in cases of harassment, blackmail, and bullying.

The move mirrors earlier cyberbullying and cyberstalking laws, extending similar protections to AI-generated content. Authorities are promoting AI while stressing the need for strong public safety and legal safeguards.

AI and youth participation were the main themes discussed during the National Youth Parliament meeting, where Prime Minister Robert Abela highlighted the role of young people in shaping Malta’s long-term development strategy, Vision Malta 2050.

The strategy focuses on the next 25 years and directly affects those entering the workforce or starting families.

Young people were described as key drivers of national policy in areas such as fertility, environmental protection, and work-life balance. Senior officials and members of the Youth Advisory Forum attended the meeting.


Australia raises concerns over AI misuse on X

The eSafety regulator in Australia has expressed concern over the misuse of the generative AI system Grok on social media platform X, following reports involving sexualised or exploitative content, particularly affecting children.

Although overall report numbers remain low, Australian authorities have observed an increase in recent weeks.

The regulator confirmed that enforcement powers under the Online Safety Act remain available where content meets defined legal thresholds.

X and other services are subject to systemic obligations requiring the detection and removal of child sexual exploitation material, alongside broader industry codes and safety standards.

eSafety has formally requested further information from X regarding safeguards designed to prevent misuse of generative AI features and to ensure compliance with existing obligations.

Previous enforcement actions taken in 2025 against similar AI services resulted in their withdrawal from the Australian market.

Additional mandatory safety codes will take effect in March 2026, introducing new obligations for AI services to limit children’s exposure to sexually explicit, violent and self-harm-related material.

Authorities emphasised the importance of Safety by Design measures and continued international cooperation among online safety regulators.


Claude expands into healthcare and life sciences

Healthcare and life sciences organisations face increasing administrative pressure, fragmented systems, and rapidly evolving research demands. At the same time, regulatory compliance, safety, and trust remain critical requirements across all clinical and scientific operations.

Anthropic has launched new tools and connectors for Claude in Microsoft Foundry to support enterprise-scale AI workflows. Built on Azure’s secure infrastructure, the platform promotes responsible integration across data, compliance, and workflow automation environments.

The new capabilities are designed specifically for healthcare and life sciences use cases, including prior authorisation review, claims appeals processing, care coordination, and patient triage.

In research and development, the tools support protocol drafting, regulatory submissions, bioinformatics analysis, and experimental design.

According to Anthropic, the updates build on significant improvements in Claude’s underlying models, delivering stronger performance in areas such as scientific interpretation, computational biology, and protein understanding.

The aim is to enable faster, more reliable decision-making across regulated, real-world workflows.


AI-powered toys navigate safety concerns after early missteps

Toy makers at the Consumer Electronics Show highlighted efforts to improve AI in playthings following troubling early reports of chatbots giving unsuitable responses to children’s questions.

A recent Public Interest Research Group report found that some AI toys, such as an AI-enabled teddy bear, produced inappropriate advice, prompting companies like FoloToy to update their models and suspend problematic products.

Among newer devices, Curio’s Grok toy, which refuses to answer questions deemed inappropriate and allows parental overrides, has earned independent safety certification. However, concerns remain about continuous listening and data privacy.

Experts advise parents to be cautious about toys that retain information over time or engage in ongoing interactions with young users.

Some manufacturers are positioning AI toys as educational tools, such as language-learning companions with time-limited, guided chat interactions, while others have built in flags to alert parents when inappropriate content arises.

Despite these advances, critics argue that self-regulation is insufficient and call for clearer guardrails and possible regulation to protect children in AI-toy environments.
