France weighs social media ban for under 15s

France’s health watchdog has warned that social media harms adolescent mental health, particularly among younger girls. The assessment is based on a five-year scientific review of existing research.

ANSES said online platforms amplify harmful pressures, cyberbullying and unrealistic beauty standards. Experts found that girls, LGBT youths and vulnerable teens face higher psychological risks.

France is debating legislation to ban social media access for children under 15. President Emmanuel Macron supports stronger age restrictions and platform accountability.

The watchdog urged changes to algorithms and default settings to prioritise child well-being. Similar debates have emerged globally following Australia’s decision to ban social media for under-16s.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK drops mandatory digital ID plan for workers

The UK government has dropped plans for mandatory digital ID for workers. Ministers say existing right-to-work checks will be digitised instead.

Labour had argued compulsory digital ID would curb illegal working and fraud in the UK. Under the revised plan, checks will become fully online by 2029, without the need for a new standalone ID system.

The reversal follows a political backlash, collapsing public support and concern among Labour MPs. Keir Starmer faced criticism over unclear messaging and repeated recent policy U-turns.

Ministers say platforms like Gov.uk One Login remain central to reform. Regulators, including Ofcom, continue to oversee digital compliance and worker protections.

AI-generated images raise consent concerns in the UK

UK lawmaker Jess Asato said an AI-altered image depicting her in a bikini circulated online. The incident follows wider reports of sexualised deepfake abuse targeting women on social media.

Platforms hosted thousands of comments, including further manipulated images, heightening distress. Victims describe the content as realistic, dehumanising and violating personal consent.

UK government ministers have pledged to ban nudification tools and criminalise non-consensual intimate images. Technology firms face pressure to remove content, suspend accounts, and follow Ofcom guidance to maintain a safe online environment.

UK considers social media limits for youth

Keir Starmer has told Labour MPs that he is open to an Australian-style ban on social media for young people, following concerns about the amount of time children spend on screens.

The prime minister said reports of very young children using phones for hours each day have increased anxiety about the effects of digital platforms on under-16s.

Starmer previously opposed such a ban, arguing that enforcement would prove difficult and might instead push teenagers towards unregulated online spaces rather than safer platforms. Growing political momentum across Westminster, combined with Australia’s decision to act, has led to a reassessment of that position.

Speaking to MPs, Starmer said different enforcement approaches were being examined and added that phone use during school hours should be restricted.

UK ministers have also revisited earlier proposals aimed at reducing the addictive design of social media and strengthening safeguards on devices sold to teenagers.

Support for stricter measures has emerged across party lines, with senior figures from Labour, the Conservatives, the Liberal Democrats and Reform UK signalling openness to a ban.

A final decision is expected within months as ministers weigh child safety, regulation and practical implementation.

OpenAI expands healthcare strategy with Torch acquisition

US AI company OpenAI has acquired healthcare technology startup Torch only days after unveiling ChatGPT Health, signalling an accelerated push into medical and clinical applications.

Financial terms were not officially disclosed, although media reports estimate the transaction at between $60 million and $100 million.

Torch was developed as a unified medical memory platform, designed to consolidate patient data from hospitals, laboratories, wearable devices and consumer testing services.

The company positioned its technology as a means to support AI systems in navigating fragmented healthcare information, rather than relying on isolated data sources.

Torch’s four-person team will join OpenAI following the acquisition, reinforcing the company’s internal healthcare expertise. OpenAI has emphasised privacy, safety and collaboration with medical professionals as core principles guiding its expansion into sensitive data environments.

The move follows a broader strategy by OpenAI to strengthen enterprise offerings, particularly for large healthcare organisations. Recent hires and partnerships suggest healthcare remains a priority area as AI adoption increases across regulated sectors.

Grok controversy fuels political backlash in Northern Ireland

A Northern Ireland politician, Cara Hunter of the Social Democratic and Labour Party (SDLP), has quit X after renewed concerns over Grok AI misuse. She cited failures to protect women and children online.

The decision follows criticism of Grok AI features enabling non-consensual sexualised images. UK regulators have launched investigations under online safety laws.

UK ministers plan to criminalise creating intimate deepfakes and supplying related tools. Ofcom is examining whether X breached its legal duties.

Political leaders and rights groups say enforcement must go further. X says it removes illegal content and has restricted Grok’s image functions on the platform.

AI tools influence modern personal finance practices

Personal finance assistants powered by AI tools are increasingly helping users manage budgets, analyse spending, and organise financial documents. Popular platforms such as ChatGPT, Google Gemini, Microsoft Copilot, and Claude now offer features designed to support everyday financial tasks.

Rather than focusing on conversational style, users should consider how financial data is accessed and how each assistant integrates with existing systems. Connections to spreadsheets, cloud storage, and secure platforms often determine how effective AI tools are for managing financial workflows.

ChatGPT is commonly used for drafting financial summaries, analysing expenses, and creating custom tools through plugins. Google Gemini is closely integrated with Google Docs and Sheets, making it suitable for users who rely on Google’s productivity ecosystem.

Microsoft Copilot provides strong automation for Excel and Microsoft 365 users, with administrative controls that appeal to organisations. Claude focuses on safety and large context windows, allowing it to process lengthy financial documents with more conservative output.

Choosing the most suitable AI tools for personal finance depends on workflow needs, data governance preferences, and privacy considerations. No single platform dominates every use case; each offers strengths across different financial management tasks.

Government IT vulnerabilities revealed by UK public sector cyberattack

A UK public sector cyberattack on Kensington and Chelsea Council has exposed the growing vulnerability of government organisations to data breaches. The council stated that personal details linked to hundreds of thousands of residents may have been compromised after attackers targeted the shared IT infrastructure.

Security experts warn that interconnected systems, while cost-efficient, create systemic risks. Dray Agha, senior manager of security operations at Huntress, said a single breach can quickly spread across partner organisations, disrupting essential services and exposing sensitive information.

Public sector bodies remain attractive targets due to ageing infrastructure and the volume of personal data they hold. Records such as names, addresses, national ID numbers, health information, and login credentials can be exploited for fraud, identity theft, and large-scale scams.

Gregg Hardie, public sector regional vice president at SailPoint, noted that attackers often employ simple, high-volume tactics rather than sophisticated techniques. Compromised credentials allow criminals to blend into regular activity and remain undetected for long periods before launching disruptive attacks.

Hardie said stronger identity security and continuous monitoring are essential to prevent minor intrusions from escalating. Investing in resilient, segmented systems could help reduce the impact of future cyberattacks on the UK public sector and protect critical operations.

EU warns X over Grok AI image abuse

The European Commission has warned X to address issues related to its Grok AI tool. Regulators say new features enabled the creation of sexualised images, including those of children.

EU Tech Sovereignty Commissioner Henna Virkkunen has stated that investigators have already taken action under the Digital Services Act. Failure to comply could result in enforcement measures being taken against the platform.

X recently restricted Grok’s image editing functions to paying users after criticism from regulators and campaigners. Irish and EU media watchdogs are now engaging with Brussels on the issue.

UK ministers also plan laws banning non-consensual intimate images and tools enabling their creation. Several digital rights groups argue that existing laws already permit criminal investigations and fines.

DeepSeek to launch Italian version of chatbot

Chinese AI start-up DeepSeek will launch a customised Italian version of its online chatbot following a probe by the Italian competition authority, the AGCM. The move follows months of negotiations and a temporary 2025 ban due to concerns over user data and transparency.

The AGCM had criticised DeepSeek for not sufficiently warning users about hallucinations or false outputs generated by its AI models.

The probe ended after DeepSeek agreed to clearer Italian disclosures and technical fixes to reduce hallucinations. The regulator noted that while improvements are commendable, hallucinations remain a global AI challenge.

DeepSeek now provides longer warnings in Italian and detects Italian IP addresses or Italian-language prompts in order to display localised notices. The company also plans workshops to ensure staff understand Italian consumer law and has submitted multiple proposals to the AGCM since September 2025.

The start-up must provide a progress report within 120 days. Failure to meet the regulator’s requirements could lead to the probe being reopened and fines of up to €10 million (£8.7m).
