Virginia sets new limits on AI chatbots for minors

Lawmakers in Virginia are preparing fresh efforts to regulate AI as concerns grow over its influence on minors and vulnerable users.

Legislators will return in January with a set of proposals focused on limiting the capabilities of chatbots, curbing deepfakes and restricting automated ticket-buying systems. The push follows a series of failed attempts last year to define high-risk AI systems and expand protections for consumers.

Delegate Michelle Maldonado aims to introduce measures that restrict what conversational agents can say in therapeutic interactions, preventing them from mimicking emotional support.

Her plans follow the well-publicised case of a sixteen-year-old who discussed suicidal thoughts with a chatbot before taking his own life. She argues that young people rely heavily on these tools and need stronger safeguards that recognise dangerous language and redirect users towards human help.

Maldonado will also revive a previous bill on high-risk AI, refining it to address particular sectors rather than broad categories.

Delegate Cliff Hayes is preparing legislation to require labels for synthetic media and to block AI systems from buying event tickets in bulk, a practice that lets automated tools distort prices.

Hayes has already secured a law preventing predictions from AI tools from serving as the sole basis for criminal justice decisions. He warns that the technology has advanced too quickly for policy to remain passive and urges a balance between innovation and protection.

The proposals come as the state continues to evaluate its regulatory environment under an executive order issued by Governor Glenn Youngkin.

The order directs the use of AI systems to scan the state code for unnecessary or conflicting rules, favouring streamlined governance over strict statutory frameworks. Observers argue that human oversight remains essential as legislators search for common ground on how far to extend regulatory control.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia strengthens parent support for new social media age rules

Australia entered a new phase of its online safety framework yesterday with the introduction of the Social Media Minimum Age policy.

eSafety has established a new Parent Advisory Group to support families as the country transitions to enhanced safeguards for young people. The group held its first meeting, with the Commissioner underlining the need for practical and accessible guidance for carers.

The initiative brings together twelve organisations representing a broad cross-section of communities in Australia, including First Nations families, culturally diverse groups, parents of children with disability and households in regional areas.

Their role is to help eSafety refine its approach, so parents can navigate social platforms with greater confidence, rather than feeling unsupported during rapid regulatory change.

The group will advise on parent engagement, offer evidence-informed insights and test updated resources such as the redeveloped Online Safety Parent Guide.

Their advice will aim to ensure materials remain relevant, inclusive and able to reach priority communities that often miss out on official communications.

Members will serve voluntarily until June 2026 and will work with eSafety to improve distribution networks and strengthen the national conversation on digital literacy. Their collective expertise is expected to shape guidance that reflects real family experiences instead of abstract policy expectations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ecuador and Latin America expand skills in ethical AI with UNESCO training

UNESCO is strengthening capacities in AI ethics and regulation across Ecuador and Latin America through two newly launched courses. The initiatives aim to enhance digital governance and ensure the ethical use of AI in the region.

The first course, ‘Regulation of Artificial Intelligence: A View from and towards Latin America,’ is taking place virtually from 19 to 28 November 2025.

Organised by UNESCO’s Social and Human Sciences Sector in coordination with UNESCO-Chile and CTS Lab at FLACSO Ecuador, the programme involves 30 senior officials from key institutions, including the Ombudsman’s Office and the Superintendency for Personal Data Protection.

Participants are trained in the ethical principles, risks, and opportunities of AI, guided by UNESCO’s 2021 Recommendation on the Ethics of AI.

The ‘Ethical Use of AI’ course starts next week for telecom and electoral officials. The 20-hour hybrid programme teaches officials to use UNESCO’s Readiness Assessment Methodology (RAM) to evaluate institutional readiness and plan ethical AI strategies.

UNESCO aims to train 60 officials and strengthen AI ethics and regulatory frameworks in Ecuador and Chile. The programmes reflect a broader commitment to building inclusive, human-rights-oriented digital governance in Latin America.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Character AI blocks teen chat and introduces new interactive Stories feature

A new feature called ‘Stories’ from Character.AI allows users under 18 to create interactive fiction with their favourite characters. The move replaces open-ended chatbot access, which has been entirely restricted for minors amid concerns over mental health risks.

Open-ended AI chatbots can initiate conversations at any time, raising worries about overuse and addiction among younger users.

Several lawsuits against AI companies have highlighted the dangers, prompting Character.AI to phase out access for minors and introduce a guided, safety-focused alternative.

Industry observers say the Stories feature offers a safer environment for teens to engage with AI characters while continuing to explore creative content.

The decision aligns with recent AI regulations in California and ongoing US federal proposals to limit minors’ exposure to interactive AI companions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI and anonymity intensify online violence against women

Digital violence against women is rising sharply, fuelled by AI, online anonymity, and weak legal protections, leaving millions exposed.

UN Women warns that abuse on digital platforms often spills into real life, threatening women’s safety, livelihoods, and ability to participate freely in public life.

Public figures, journalists, and activists are increasingly targeted with deepfakes, coordinated harassment campaigns, and gendered disinformation designed to silence and intimidate.

One in four women journalists reports receiving online death threats, highlighting the scale and severity of the problem.

Experts call for stronger laws, safer digital platforms, and more women in technology to address AI-driven abuse effectively. Investments in education, digital literacy, and culture-change programmes are also vital to challenge toxic online communities and ensure digital spaces promote equality rather than harm.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI scribes help reduce physician paperwork and burnout

A new UCLA Health study finds that AI-powered scribe tools can reduce physicians’ documentation time and may improve work satisfaction. Conducted across 14 specialities and 72,000 patient visits, the trial tested Microsoft DAX and Nabla in real-world clinical settings.

Physicians using Nabla cut the time spent writing each note by almost 10% compared with usual care, saving around 41 seconds per note. Both AI tools modestly reduced burnout, cognitive workload, and work exhaustion, but physician oversight remains essential.

The trial highlighted several limitations, including occasional inaccuracies in AI-generated notes and a single instance of a mild patient-safety concern. Physicians found the tools easy to use and noted improved patient engagement, with most patients being receptive.

The findings provide timely evidence as healthcare systems increasingly adopt AI scribes. The researchers emphasise that rigorous evaluation is needed to ensure patient safety and effectiveness, and recommend further long-term studies across multiple institutions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots misidentify images they created

Growing numbers of online users are turning to AI chatbots to verify suspicious images, yet many tools are failing to detect fakes they created themselves. AFP found several cases in Asia where AI systems labelled fabricated photos as authentic, including a viral image of former Philippine lawmaker Elizaldy Co.

The failures highlight a lack of genuine visual analysis in current models. Many are trained primarily on language patterns, resulting in inconsistent decisions even when dealing with images produced by the same generative systems.

Investigations also uncovered similar misidentifications during unrest in Pakistan-administered Kashmir, where AI models wrongly validated synthetic protest images. A Columbia University review reinforced the trend, with seven leading systems unable to verify any of the ten authentic news photos.

Specialists argue that AI may assist professional fact-checkers but cannot replace them. They emphasise that human verification remains essential as AI-generated content becomes increasingly lifelike and continues to circulate widely across social media platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI use by US immigration agents sparks concern

A US federal judge has condemned immigration agents in Chicago for using AI to draft use-of-force reports, warning that the practice undermines credibility. Judge Sara Ellis noted that one agent fed a short description and images into ChatGPT before submitting the report.

Body camera footage cited in the ruling showed discrepancies between the recorded events and the written narrative. Experts say AI-generated accounts risk inaccuracies in situations where courts rely on an officer’s personal recollection to assess reasonableness.

Researchers argue that poorly supervised AI use could erode public trust and compromise privacy. Some warn that uploading images into public tools relinquishes control of sensitive material, exposing it to misuse.

Police departments across the US are still developing policies for safe deployment of generative tools. Several states now require officers to label AI-assisted reports, while specialists call for stronger guardrails before the technology is applied in high-stakes legal settings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Up to 3 million UK jobs at risk from automation by 2035

A new report from the National Foundation for Educational Research (NFER) warns that up to 3 million low-skilled jobs in the UK could disappear by 2035 due to the growing adoption of automation and AI. The sectors most at risk include trades, machine operations and administrative work, where routine and repetitive tasks dominate.

Economic forecasts remain mixed. The overall UK labour market is expected to grow by 2.3 million jobs by 2035, with gains primarily in professional and managerial roles. Even so, many displaced workers may struggle to find new employment, widening inequality.

The projection contrasts with earlier predictions suggesting AI would target higher-skilled jobs such as consultancy or software engineering. The current findings emphasise that manual and lower-skill roles face the most significant short-term disruption from AI.

Policymakers and educators are encouraged to build extensive retraining programmes and foster skills like creativity, communication and digital literacy. Without such efforts, long-term unemployment could become a significant challenge.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyberattack disrupts services across multiple London boroughs

Multiple London councils are responding to a cyberattack that has disrupted shared IT systems and raised concerns about data exposure. Kensington and Chelsea and Westminster councils detected the incident on Monday and alerted the Information Commissioner’s Office as investigations began.

The councils say they are working with specialist incident teams and the National Cyber Security Centre (NCSC) to protect systems and keep key services running. Several platforms have been affected, and staff have been redeployed to support residents through monitored phone lines and email channels.

Hammersmith and Fulham, which shares IT services with the affected councils, has also reported disruption. Local leaders say it is too early to confirm who was responsible or whether personal data has been compromised. Overnight mitigation work has been carried out as monitoring continues.

Security researchers describe indications of a serious intrusion involving lateral movement across shared infrastructure. They warn that attackers may escalate to data theft or encryption, given the sensitivity of the information held by local authorities.

National security agencies and police are assessing the incident’s potential impact. Analysts say the attack highlights long-standing risks facing councils that manage extensive services on limited budgets and with inconsistent cyber safeguards.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!