DeepSeek to launch Italian version of chatbot

Chinese AI start-up DeepSeek will launch a customised Italian version of its online chatbot following a probe by the Italian competition authority, the AGCM. The move follows months of negotiations and a temporary ban imposed in 2025 over concerns about user data and transparency.

The AGCM had criticised DeepSeek for not sufficiently warning users about hallucinations or false outputs generated by its AI models.

The probe ended after DeepSeek agreed to clearer Italian disclosures and technical fixes to reduce hallucinations. The regulator noted that while improvements are commendable, hallucinations remain a global AI challenge.

DeepSeek now provides longer Italian-language warnings and detects Italian IP addresses or Italian-language prompts in order to display localised notices. The company also plans workshops to ensure staff understand Italian consumer law and has submitted multiple proposals to the AGCM since September 2025.

The start-up must provide a progress report within 120 days. Failure to meet the regulator’s requirements could lead to the probe being reopened and fines of up to €10 million (£8.7m).

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Next-generation Siri will use Google’s Gemini AI model

Apple and Google have confirmed a multi-year partnership that will see Google’s Gemini models powering Siri and future Apple Intelligence features. The collaboration will underpin Apple’s next-generation AI models, with updates coming later this year.

The move follows delays in rolling out Siri upgrades first unveiled at WWDC 2024. While most Apple Intelligence features have already been launched, the redesigned Siri has been postponed due to development taking longer than anticipated.

According to reports, Apple will continue using its own models for specific tasks, while Gemini is expected to handle summarisation, planning, and other advanced functions.

Bloomberg reports the upcoming Siri will be structured around three layers: query planning, knowledge retrieval, and summarisation. Gemini will handle planning and summarisation, helping Siri structure responses and create clear summaries.

Knowledge retrieval may also benefit from Gemini, potentially broadening Siri’s general knowledge capabilities beyond its current hand-off system.

All AI processing will operate on Apple’s Private Cloud Compute platform, ensuring user privacy and keeping data secure. Analysts suggest this integration will embed Gemini more deeply into Siri’s core functionality, rather than serving as a supplementary tool.

Malta plans tougher laws against deepfake abuse

Malta’s government is preparing new legal measures to curb the abusive use of deepfake technology, with existing laws now under review. The planned reforms aim to introduce penalties for the misuse of AI in cases of harassment, blackmail, and bullying.

The move mirrors earlier cyberbullying and cyberstalking laws, extending similar protections to AI-generated content. Authorities are promoting AI while stressing the need for strong public safety and legal safeguards.

AI and youth participation were the main themes of the National Youth Parliament meeting, where Prime Minister Robert Abela highlighted the role of young people in shaping Malta’s long-term development strategy, Vision Malta 2050.

The strategy focuses on the next 25 years and directly affects those entering the workforce or starting families.

Young people were described as key drivers of national policy in areas such as fertility, environmental protection, and work-life balance. Senior officials and members of the Youth Advisory Forum attended the meeting.

Australia raises concerns over AI misuse on X

The eSafety regulator in Australia has expressed concern over the misuse of the generative AI system Grok on social media platform X, following reports involving sexualised or exploitative content, particularly affecting children.

Although overall report numbers remain low, Australian authorities have observed an increase in recent weeks.

The regulator confirmed that enforcement powers under the Online Safety Act remain available where content meets defined legal thresholds.

X and other services are subject to systemic obligations requiring the detection and removal of child sexual exploitation material, alongside broader industry codes and safety standards.

eSafety has formally requested further information from X regarding safeguards designed to prevent misuse of generative AI features and to ensure compliance with existing obligations.

Previous enforcement actions taken in 2025 against similar AI services resulted in their withdrawal from the Australian market.

Additional mandatory safety codes will take effect in March 2026, introducing new obligations for AI services to limit children’s exposure to sexually explicit, violent and self-harm-related material.

Authorities emphasised the importance of Safety by Design measures and continued international cooperation among online safety regulators.

Claude expands into healthcare and life sciences

Healthcare and life sciences organisations face increasing administrative pressure, fragmented systems, and rapidly evolving research demands. At the same time, regulatory compliance, safety, and trust remain critical requirements across all clinical and scientific operations.

Anthropic has launched new tools and connectors for Claude in Microsoft Foundry to support enterprise-scale AI workflows. Built on Azure’s secure infrastructure, the platform promotes responsible integration across data, compliance, and workflow automation environments.

The new capabilities are designed specifically for healthcare and life sciences use cases, including prior authorisation review, claims appeals processing, care coordination, and patient triage.

In research and development, the tools support protocol drafting, regulatory submissions, bioinformatics analysis, and experimental design.

According to Anthropic, the updates build on significant improvements in Claude’s underlying models, delivering stronger performance in areas such as scientific interpretation, computational biology, and protein understanding.

The aim is to enable faster, more reliable decision-making across regulated, real-world workflows.

AI-powered toys navigate safety concerns after early missteps

Toy makers at the Consumer Electronics Show highlighted efforts to improve AI in playthings following troubling early reports of chatbots giving unsuitable responses to children’s questions.

A recent Public Interest Research Group report found that some AI toys, such as an AI-enabled teddy bear, produced inappropriate advice, prompting companies like FoloToy to update their models and suspend problematic products.

Among newer devices, Curio’s Grok toy, which refuses to answer questions deemed inappropriate and allows parental overrides, has earned independent safety certification. However, concerns remain about continuous listening and data privacy.

Experts advise parents to be cautious about toys that retain information over time or engage in ongoing interactions with young users.

Some manufacturers are positioning AI toys as educational tools, such as language-learning companions with time-limited, guided chat interactions; others have built in flags to alert parents when inappropriate content arises.

Despite these advances, critics argue that self-regulation is insufficient and call for clearer guardrails and possible regulation to protect children in AI-toy environments.

UK considers regulatory action after Grok’s deepfake images on X

UK Prime Minister Keir Starmer is consulting Canada and Australia on a coordinated response to concerns surrounding social media platform X, after its AI assistant Grok was used to generate sexualised deepfake images of women and children.

The discussions focus on shared regulatory approaches rather than immediate bans.

X acknowledged weaknesses in its AI safeguards and limited image generation to paying users. Lawmakers in several countries have stated that further regulatory scrutiny may be required, while Canada has clarified that no prohibition is currently under consideration, despite concerns over platform responsibility.

In the UK, media regulator Ofcom is examining potential breaches of online safety obligations. Technology Secretary Liz Kendall confirmed that enforcement mechanisms remain available if legal requirements are not met.

Australian Prime Minister Anthony Albanese also raised broader concerns about social responsibility in the use of generative AI.

X owner Elon Musk rejected accusations of non-compliance, describing potential restrictions as censorship and suppression of free speech.

European authorities requested the preservation of internal records for possible investigations, while Indonesia and Malaysia have already blocked access to the platform.

Google brings AI to personalised shopping

Google is working with major retailers to use AI in guiding customers from product discovery to checkout. The company has launched the Universal Commerce Protocol, an open standard for seamless agentic commerce that keeps retailers in control of customer relationships.

The Universal Commerce Protocol works with existing systems and partners, including Shopify, Etsy, Wayfair, Target, and Walmart.

Customers can receive personalised offers, loyalty rewards, and recommendations in Google Search or Gemini, completing purchases via Google Pay without leaving the platform.

To support retailers, Google has launched Gemini Enterprise for Customer Experience, which unifies search, commerce, and service touchpoints across all channels.

Early partners, such as The Home Depot and McDonald’s, are already utilising AI-powered agents to enhance service, provide proactive recommendations, and improve customer engagement.

Logistics also feature prominently, with Wing expanding delivery capabilities alongside Walmart, doubling operations in existing markets, and rolling out to Houston, Orlando, Tampa, Charlotte, and other cities.

Google aims to create an end-to-end shopping ecosystem where AI, agentic protocols, and seamless delivery elevate both customer and retailer experiences.

Google removes AI health summaries after safety concerns

Google removed some AI health summaries after a Guardian investigation found they gave misleading and potentially dangerous information. The AI Overviews contained inaccurate liver test data, potentially leading patients to falsely believe they were healthy.

Experts have criticised AI Overviews for oversimplifying complex medical topics, ignoring essential factors such as age, sex, and ethnicity. Charities have warned that misleading AI content could deter people from seeking medical care and erode trust in online health information.

Google removed AI Overviews for some queries, but concerns remain over cancer and mental health summaries that may still be inaccurate or unsafe. Professionals emphasise that AI tools must direct users to reliable sources and advise seeking expert medical input.

The company stated it is reviewing flagged examples and making broad improvements, but experts insist that more comprehensive oversight is needed to prevent AI from dispensing harmful health misinformation.

Indonesia and Malaysia restrict access to Grok AI over content safeguards

Malaysia and Indonesia have restricted access to Grok, the AI chatbot available through the X platform, following concerns about its image generation capabilities.

Authorities said the tool had been used to create manipulated images depicting real individuals in sexually explicit contexts.

Regulatory bodies in Malaysia and Indonesia stated that the decision was based on the absence of sufficient safeguards to prevent misuse.

Requests for additional risk mitigation measures were communicated to the platform operator, with access expected to remain limited until further protections are introduced.

The move has drawn attention from regulators in other regions, where online safety frameworks allow intervention when digital services fail to address harmful content. Discussions have focused on platform responsibility, content moderation standards, and compliance with existing legal obligations.
