AI fuels online abuse of women in public life

Generative AI is increasingly being weaponised to harass women in public roles, according to a new report commissioned by UN Women. Journalists, activists, and human rights defenders face AI-assisted abuse that endangers personal safety and democratic freedoms.

The study surveyed 641 women from 119 countries and found that nearly one in four of those experiencing online violence reported AI-generated or amplified abuse.

Writers, communicators, and influencers reported the highest exposure, with human rights defenders and journalists also at significant risk. Rapidly developing AI tools, including deepfakes, facilitate the creation of harmful content that spreads quickly on social media.

Online attacks often escalate into offline harm, with 41% of women linking online abuse to physical harassment, stalking, or intimidation. Female journalists are particularly affected, with offline attacks more than doubling over five years.

Experts warn that such violence threatens freedom of expression and democratic processes, particularly in authoritarian contexts.

Researchers call for urgent legal frameworks, platform accountability, and technological safeguards to prevent AI-assisted attacks on women. They advocate for human rights-focused AI design and stronger support systems to protect women in public life.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU moves to extend child abuse detection rules

The European Commission has proposed extending the Interim Regulation, which allows online service providers to voluntarily detect and report child sexual abuse, to avoid a legal gap once the current rules expire.

These measures would preserve existing safeguards while negotiations on permanent legislation continue.

The Interim Regulation enables providers of certain communication services to identify and remove child sexual abuse material under a temporary exemption from the EU's ePrivacy rules.

Without an extension beyond April 2026, voluntary detection would have to stop, making it easier for offenders to share illegal material and groom children online.

According to the Commission, proactive reporting by platforms has played a critical role for more than fifteen years in identifying abuse and supporting criminal investigations. Extending the interim framework until April 2028 is intended to maintain these protections until long-term EU rules are agreed.

The proposal now moves to the European Parliament and the Council, with the Commission urging swift agreement to ensure continued protection for children across the Union.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mac users lose ChatGPT voice access in 2026

OpenAI has confirmed that Voice interactions will stop working in the ChatGPT macOS app as of 15 January 2026, affecting users who rely on spoken conversations instead of typing.

The company states that the change is part of a broader effort to streamline voice experiences across its platforms.

Currently, the Mac app allows hands-free, real-time conversations with ChatGPT. After the deadline, voice functionality will remain accessible through chatgpt.com, as well as on iOS, Android, and the Windows app. OpenAI stresses that no other macOS features will be removed.

According to OpenAI, recent updates have already brought Voice mode closer to standard chat interactions on mobile and the web, allowing users to review earlier messages and engage with visual content while speaking.

The company has suggested that the existing macOS Voice feature may not support its next-generation approach.

Mac users will be able to continue using Voice mode until mid-January 2026. After this date, voice-based interactions will require switching to other supported platforms until a potential macOS update is introduced.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TSA introduces a fee for travellers without ID

From 1 February, the US Transportation Security Administration will charge a $45 fee to travellers who arrive at airports without a valid form of identification, such as a REAL ID or passport.

The measure is linked to the rollout of a new alternative identity verification system designed to modernise security checks.

The fee applies to passengers using TSA Confirm.ID, a process that may involve biometric or biographic verification. Even after payment, access to the secure area is not guaranteed; the charge is non-refundable, though a completed verification remains valid for ten days.

According to the TSA, the policy ensures that the traveller, rather than taxpayers, bears the cost of identity verification when documentation is insufficient. Officials have urged passengers to obtain a REAL ID or other approved documentation to avoid delays or missed flights.

The agency has indicated that travellers will be encouraged to pay the fee online before arrival. At the same time, further details are expected on how advance payment and verification will operate across different airports.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK plans ban on deepfake AI nudification apps

Britain plans to ban AI nudification apps that digitally remove clothing from images. Creating or supplying these tools would become illegal under new proposals.

The offence would build on existing UK laws covering non-consensual sexual deepfakes and intimate image abuse. Technology Secretary Liz Kendall said developers and distributors would face harsh penalties.

Experts warn that nudification apps cause serious harm, particularly when used to create child sexual abuse material. Children’s Commissioner Dame Rachel de Souza has called for a total ban on the technology.

Child protection charities welcomed the move but want more decisive action from tech firms. The government said it would work with companies to stop children from creating or sharing nude images.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New Kimwolf Android botnet linked to record-breaking DDoS attacks

Cybersecurity researchers have uncovered a rapidly expanding Android botnet known as Kimwolf, which has already compromised approximately 1.8 million devices worldwide.

The malware primarily targets smart TVs, set-top boxes, and tablets connected to residential networks, with infections concentrated in countries including Brazil, India, the US, Argentina, South Africa, and the Philippines.

Analysis by QiAnXin XLab indicates that Kimwolf demonstrates a high degree of operational resilience.

Despite multiple disruptions to its command-and-control infrastructure, the botnet has repeatedly re-emerged with enhanced capabilities, including the adoption of Ethereum Name Service to harden its communications against takedown efforts.
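
The report does not detail Kimwolf's exact lookup flow, but a minimal sketch of the general technique helps explain why ENS hardens a botnet's communications: the name-to-record mapping lives in an Ethereum smart contract rather than with a DNS registrar, so there is no registrar or hosting provider to compel. The ENS name and RPC endpoint below are illustrative placeholders, not indicators taken from the Kimwolf analysis.

```python
# Minimal sketch (Python, web3.py) of an ENS lookup of the kind a bot might
# use to locate its command-and-control endpoint. 'example.eth' and the RPC
# URL are hypothetical placeholders; real malware would obfuscate these.
from web3 import Web3

# Any Ethereum JSON-RPC endpoint works; the record itself lives on-chain,
# which is what makes conventional domain takedowns ineffective here.
w3 = Web3(Web3.HTTPProvider("https://ethereum-rpc.publicnode.com"))

# Resolve the ENS name via the on-chain registry; a bot could derive a C2
# address from the result (or from a text record attached to the name).
resolved = w3.ens.address("example.eth")
print(resolved)
```

Because anyone can read the registry but only the key holder can change it, defenders can observe such lookups yet cannot remove the record, which is consistent with the resilience researchers describe.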

Researchers also identified significant similarities between Kimwolf and AISURU, one of the most powerful botnets observed in recent years. Shared source code, infrastructure, and infection scripts suggest both botnets are operated by the same threat group and have coexisted on large numbers of infected devices.

AISURU has previously drawn attention for launching record-setting distributed denial-of-service attacks, including traffic peaks approaching 30 terabits per second.

The emergence of Kimwolf alongside such activity highlights the growing scale and sophistication of botnet-driven cyber threats targeting global internet infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Healthcare faces growing compliance pressure from AI adoption

AI is becoming a practical tool across healthcare as providers face rising patient demand, chronic disease and limited resources.

AI systems increasingly support tasks such as clinical documentation, billing, diagnostics and personalised treatment, reducing reliance on manual processes and allowing clinicians to focus more directly on patient care.

At the same time, AI introduces significant compliance and safety risks. Algorithmic bias, opaque decision-making, and outdated training data can affect clinical outcomes, raising questions about accountability when errors occur.

Regulators are signalling that healthcare organisations cannot delegate responsibility to automated systems and must retain meaningful human oversight over AI-assisted decisions.

Regulatory exposure spans federal and state frameworks, including HIPAA privacy rules, FDA oversight of AI-enabled medical devices and enforcement under the False Claims Act.

Healthcare providers are expected to implement robust procurement checks, continuous monitoring, governance structures and patient consent practices as AI regulation evolves towards a more coordinated national approach.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US platforms signal political shift in DSA risk reports

Major online platforms have submitted their 2025 systemic risk assessments under the Digital Services Act as the European Commission moves towards issuing its first fine against a Very Large Online Platform.

The reports arrive amid mounting political friction between Brussels and Washington, placing platform compliance under heightened scrutiny on both regulatory and geopolitical fronts.

Several US-based companies adjusted how risks related to hate speech, misinformation and diversity are framed, reflecting political changes in the US while maintaining formal alignment with EU law.

Meta softened enforcement language, reclassified hate speech under broader categories and reduced visibility of civil rights structures, while continuing to emphasise freedom of expression as a guiding principle.

Google and YouTube similarly narrowed references to misinformation, replaced established terminology with less charged language and limited enforcement narratives to cases involving severe harm.

LinkedIn followed comparable patterns, removing references to earlier commitments on health misinformation, civic integrity and EU voluntary codes that have since been integrated into the DSA framework.

X largely retained its prior approach, although its report continues to reference cooperation with governments and civil society, which contrasts with the platform’s public positioning.

TikTok diverged from other platforms by expanding disclosures on hate speech, election integrity and fact-checking, likely reflecting its vulnerability to regulatory action in both the EU and the US.

European regulators are expected to assess whether these shifts represent genuine risk mitigation or strategic alignment with US political priorities.

As systemic risk reports increasingly inform enforcement decisions, subtle changes in language, scope and emphasis may carry regulatory consequences well beyond their formal compliance function.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini users can now build custom AI mini-apps with Opal

Google has expanded the availability of Opal, a no-code experimental tool from Google Labs, by integrating it directly into the Gemini web application.

This integration allows users to build AI-powered mini-apps, known as Gems, without writing any code, using natural language descriptions and a visual workflow editor inside Gemini’s interface.

Previously available only via separate Google Labs experiments, Opal now appears in the Gems manager section of the Gemini web app, where users can describe the functionality they want and have Gemini generate a customised mini-app.

These mini-apps can be reused for specific tasks and workflows and saved as part of a user’s Gem collection.

The no-code ‘vibe-coding’ approach aims to democratise AI development by enabling creators, developers and non-technical users alike to build applications that automate or augment tasks, all through intuitive language prompts and visual building blocks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI adds pinned chat feature to ChatGPT apps

The US tech company OpenAI has begun rolling out a pinned chats feature in ChatGPT across web, Android and iOS, allowing users to keep selected conversations fixed at the top of their chat history for faster access.

The function mirrors familiar behaviour from messaging platforms such as WhatsApp and Telegram, sparing users from repeatedly scrolling through past chats.

Users can pin a conversation by selecting the three-dot menu on the web or by long-pressing on mobile devices, ensuring that essential discussions remain visible regardless of how many new chats are created.

The update follows earlier interface changes aimed at helping users explore conversation paths without losing the original discussion thread.

Alongside pinned chats, OpenAI is moving ChatGPT towards a more app-driven experience through an internal directory that allows users to connect third-party services directly within conversations.

The company says these integrations support tasks such as bookings, file handling and document creation without switching applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!