Australia raises concerns over AI misuse on X

The eSafety regulator in Australia has expressed concern over the misuse of the generative AI system Grok on social media platform X, following reports involving sexualised or exploitative content, particularly affecting children.

Although overall report numbers remain low, Australian authorities have observed an increase in recent weeks.

The regulator confirmed that enforcement powers under the Online Safety Act remain available where content meets defined legal thresholds.

X and other services are subject to systemic obligations requiring the detection and removal of child sexual exploitation material, alongside broader industry codes and safety standards.

eSafety has formally requested further information from X regarding safeguards designed to prevent misuse of generative AI features and to ensure compliance with existing obligations.

Previous enforcement actions taken in 2025 against similar AI services resulted in their withdrawal from the Australian market.

Additional mandatory safety codes will take effect in March 2026, introducing new obligations for AI services to limit children’s exposure to sexually explicit, violent and self-harm-related material.

Authorities emphasised the importance of Safety by Design measures and continued international cooperation among online safety regulators.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Claude expands into healthcare and life sciences

Healthcare and life sciences organisations face increasing administrative pressure, fragmented systems, and rapidly evolving research demands. At the same time, regulatory compliance, safety, and trust remain critical requirements across all clinical and scientific operations.

Anthropic has launched new tools and connectors for Claude in Microsoft Foundry to support enterprise-scale AI workflows. Built on Azure’s secure infrastructure, the platform promotes responsible integration across data, compliance, and workflow automation environments.

The new capabilities are designed specifically for healthcare and life sciences use cases, including prior authorisation review, claims appeals processing, care coordination, and patient triage.

In research and development, the tools support protocol drafting, regulatory submissions, bioinformatics analysis, and experimental design.

According to Anthropic, the updates build on significant improvements in Claude’s underlying models, delivering stronger performance in areas such as scientific interpretation, computational biology, and protein understanding.

The aim is to enable faster, more reliable decision-making across regulated, real-world workflows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI enters Colorado classrooms as schools experiment with generative tools

Teachers across Colorado are exploring how AI can serve as an instructional assistant to support classroom teaching and student learning.

Some educators are experimenting with generative AI tools that help with tasks like lesson planning, summarising material and creating examples, while also educating students on responsible use of AI.

The broader trend mirrors state and district efforts to develop AI strategies for education. Reports indicate that many districts are establishing steering committees and policies to guide the safe and effective use of AI in classrooms.

In contrast, others limit student access due to privacy concerns, underscoring the need for training and clear guidelines.

Teachers have noted both benefits, such as time savings and personalised support, and challenges, including ethical questions about plagiarism and student independence, highlighting a period of experimentation and adjustment as AI becomes part of mainstream education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-powered toys navigate safety concerns after early missteps

Toy makers at the Consumer Electronics Show highlighted efforts to improve AI in playthings following troubling early reports of chatbots giving unsuitable responses to children’s questions.

A recent Public Interest Research Group report found that some AI toys, such as an AI-enabled teddy bear, produced inappropriate advice, prompting companies like FoloToy to update their models and suspend problematic products.

Among newer devices, Curio’s Grok toy, which refuses to answer questions deemed inappropriate and allows parental overrides, has earned independent safety certification. However, concerns remain about continuous listening and data privacy.

Experts advise parents to be cautious about toys that retain information over time or engage in ongoing interactions with young users.

Some manufacturers are positioning AI toys as educational tools, such as language-learning companions with time-limited, guided chat interactions, while others have built in flags to alert parents when inappropriate content arises.

Despite these advances, critics argue that self-regulation is insufficient and call for clearer guardrails and possible regulation to protect children in AI-toy environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Welsh government backs AI adoption with £2.1m support

The Welsh Government is providing £2.1 million in funding to support small and medium-sized businesses across Wales in adopting AI. The initiative aims to promote the ethical and practical use of AI, enhancing productivity and competitiveness.

Business Wales will receive £600,000 to deliver an AI awareness and adoption programme, following recent reviews on SME productivity. Additional funding will enhance tourism and events through targeted AI projects and practical workshops.

A further £1 million will expand AI upskilling through the Flexible Skills Programme, addressing digital skills gaps across regions and sectors. Employers will contribute part of the training costs to support inclusive growth.

Swansea-based Something Different Wholesale is already using AI to automate tasks, analyse market data and improve customer services. Welsh ministers say the funding supports the responsible adoption of AI, aligned with the AI Plan for Wales.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Young people worry about jobs and inflation

Rising living costs and economic instability are the biggest worries for young people worldwide. A World Economic Forum survey shows inflation dominates personal and global concerns.

Many young people fear that AI-driven automation will shrink entry-level job opportunities. Two-thirds expect fewer early-career roles despite growing engagement with AI tools.

Nearly 60 per cent already use AI to build skills and improve employability. Side hustles and freelance work are increasingly common responses to economic pressure.

Youth respondents call for quality jobs, better education access and affordable housing. Climate change also ranks among the most serious long-term global risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI gap reflects China’s growing technological ambitions

China’s AI sector could narrow the technology gap with the United States through growing risk-taking and innovation, according to leading researchers. Despite export controls on advanced chipmaking tools, Chinese firms are accelerating development across multiple AI fields.

Yao Shunyu, a former senior researcher at ChatGPT maker OpenAI and now an AI scientist at Tencent, said a Chinese company could become the world’s leading AI firm within three to five years. He pointed to China’s strengths in electricity supply and infrastructure as key advantages.

Yao said the main bottlenecks remain production capacity, including access to advanced lithography machines, and a mature software ecosystem. These limits still restrict China’s ability to manufacture the most advanced semiconductors and, in turn, to narrow the AI gap with the US.

China has developed a working prototype of an extreme-ultraviolet lithography machine that could eventually rival Western technology. However, Reuters reported the system has not yet produced functioning chips.

Sources familiar with the project said commercial chip production using the machine may not begin until around 2030. Until then, Chinese AI ambitions are likely to remain constrained by hardware limitations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Global fertilizer expo partners with University of Florida AI research hub


Concerns grow over planned EU-US biometrics deal

The EU has agreed to open talks with the US on sharing sensitive traveller data. The discussions aim to preserve visa-free travel for European citizens.

The proposal is called the ‘Enhanced Border Security Partnership’, and it could allow transfers of biometric data and other sensitive personal information. Legal experts warn that unclear limits may widen access beyond travellers alone.

EU governments have authorised the European Commission to negotiate a shared framework. Member states would later settle details through bilateral agreements with Washington.

Academics and privacy advocates are calling for stronger safeguards and transparency. EU officials insist data protection limits will form part of any final agreement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Teen victim turns deepfake experience into education

A US teenager targeted by explicit deepfake images has helped create a new training course. The programme aims to support students, parents and school staff facing online abuse.

The course explains how AI tools are used to create sexualised fake images. It also outlines legal rights, reporting steps and available victim support resources.

Research shows deepfake abuse is spreading among teenagers, despite stronger laws. One in eight US teens knows someone targeted by non-consensual fake images.

Developers say education remains critical as AI tools become easier to access. Schools are encouraged to adopt training to protect students and prevent harm.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!