TSA introduces a fee for travellers without ID

From 1 February, the US Transportation Security Administration will charge a $45 fee to travellers who arrive at airports without a valid form of identification, such as a REAL ID or passport.

The measure is linked to the rollout of a new alternative identity verification system designed to modernise security checks.

The fee applies to passengers using TSA Confirm.ID, a process that may involve biometric or biographic verification. Even after payment, access to the secure area is not guaranteed; the charge is non-refundable and remains valid for ten days.

According to the TSA, the policy ensures that the traveller, rather than taxpayers, bears the cost of verifying passengers who arrive with insufficient identification. Officials have urged passengers to obtain a REAL ID or other approved documentation to avoid delays or missed flights.

The agency has indicated that travellers will be encouraged to pay the fee online before arrival. At the same time, further details are expected on how advance payment and verification will operate across different airports.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK plans ban on deepfake AI nudification apps

Britain plans to ban AI-nudification apps that digitally remove clothing from images. Creating or supplying these tools would become illegal under new proposals.

The offence would build on existing UK laws covering non-consensual sexual deepfakes and intimate image abuse. Technology Secretary Liz Kendall said developers and distributors would face harsh penalties.

Experts warn that nudification apps cause serious harm, mainly when used to create child sexual abuse material. Children’s Commissioner Dame Rachel de Souza has called for a total ban on the technology.

Child protection charities welcomed the move but want more decisive action from tech firms. The government said it would work with companies to stop children from creating or sharing nude images.

New Kimwolf Android botnet linked to record-breaking DDoS attacks

Cybersecurity researchers have uncovered a rapidly expanding Android botnet known as Kimwolf, which has already compromised approximately 1.8 million devices worldwide.

The malware primarily targets smart TVs, set-top boxes, and tablets connected to residential networks, with infections concentrated in countries including Brazil, India, the US, Argentina, South Africa, and the Philippines.

Analysis by QiAnXin XLab indicates that Kimwolf demonstrates a high degree of operational resilience.

Despite multiple disruptions to its command-and-control infrastructure, the botnet has repeatedly re-emerged with enhanced capabilities, including the adoption of Ethereum Name Service to harden its communications against takedown efforts.

Researchers also identified significant similarities between Kimwolf and AISURU, one of the most powerful botnets observed in recent years. Shared source code, infrastructure, and infection scripts suggest both botnets are operated by the same threat group and have coexisted on large numbers of infected devices.

AISURU has previously drawn attention for launching record-setting distributed denial-of-service attacks, including traffic peaks approaching 30 terabits per second.

The emergence of Kimwolf alongside such activity highlights the growing scale and sophistication of botnet-driven cyber threats targeting global internet infrastructure.

Healthcare faces growing compliance pressure from AI adoption

AI is becoming a practical tool across healthcare as providers face rising patient demand, chronic disease and limited resources.

These AI systems increasingly support tasks such as clinical documentation, billing, diagnostics and personalised treatment, replacing purely manual processes and allowing clinicians to focus more directly on patient care.

At the same time, AI introduces significant compliance and safety risks. Algorithmic bias, opaque decision-making, and outdated training data can affect clinical outcomes, raising questions about accountability when errors occur.

Regulators are signalling that healthcare organisations cannot delegate responsibility to automated systems and must retain meaningful human oversight over AI-assisted decisions.

Regulatory exposure spans federal and state frameworks, including HIPAA privacy rules, FDA oversight of AI-enabled medical devices and enforcement under the False Claims Act.

Healthcare providers are expected to implement robust procurement checks, continuous monitoring, governance structures and patient consent practices as AI regulation evolves towards a more coordinated national approach.

US platforms signal political shift in DSA risk reports

Major online platforms have submitted their 2025 systemic risk assessments under the Digital Services Act as the European Commission moves towards issuing its first fine against a Very Large Online Platform.

The reports arrive amid mounting political friction between Brussels and Washington, placing platform compliance under heightened scrutiny on both regulatory and geopolitical fronts.

Several US-based companies adjusted how risks related to hate speech, misinformation and diversity are framed, reflecting political changes in the US while maintaining formal alignment with EU law.

Meta softened enforcement language, reclassified hate speech under broader categories and reduced visibility of civil rights structures, while continuing to emphasise freedom of expression as a guiding principle.

Google and YouTube similarly narrowed references to misinformation, replaced established terminology with less charged language and limited enforcement narratives to cases involving severe harm.

LinkedIn followed comparable patterns, removing references to earlier commitments on health misinformation, civic integrity and EU voluntary codes that have since been integrated into the DSA framework.

X largely retained its prior approach, although its report continues to reference cooperation with governments and civil society that contrasts with the platform’s public positioning.

TikTok diverged from other platforms by expanding disclosures on hate speech, election integrity and fact-checking, likely reflecting its vulnerability to regulatory action in both the EU and the US.

European regulators are expected to assess whether these shifts represent genuine risk mitigation or strategic alignment with US political priorities.

As systemic risk reports increasingly inform enforcement decisions, subtle changes in language, scope and emphasis may carry regulatory consequences well beyond their formal compliance function.

Gemini users can now build custom AI mini-apps with Opal

Google has expanded the availability of Opal, a no-code experimental tool from Google Labs, by integrating it directly into the Gemini web application.

This integration allows users to build AI-powered mini-apps, known as Gems, without writing any code, using natural language descriptions and a visual workflow editor inside Gemini’s interface.

Previously available only via separate Google Labs experiments, Opal now appears in the Gems manager section of the Gemini web app, where users can describe the functionality they want and have Gemini generate a customised mini-app.

These mini-apps can be reused for specific tasks and workflows and saved as part of a user’s Gem collection.

The no-code ‘vibe-coding’ approach aims to democratise AI development by enabling creators, developers and non-technical users alike to build applications that automate or augment tasks, all through intuitive language prompts and visual building blocks.

OpenAI adds pinned chat feature to ChatGPT apps

The US tech company OpenAI has begun rolling out a pinned chats feature in ChatGPT across web, Android and iOS, allowing users to keep selected conversations fixed at the top of their chat history for faster access.

The function mirrors familiar behaviour from messaging platforms such as WhatsApp and Telegram, sparing users repeated scrolling through past chats.

Users can pin a conversation by selecting the three-dot menu on the web or by long-pressing on mobile devices, ensuring that essential discussions remain visible regardless of how many new chats are created.

The update follows earlier interface changes aimed at helping users explore conversation paths without losing the original discussion thread.

Alongside pinned chats, OpenAI is moving ChatGPT toward a more app-driven experience through an internal directory that allows users to connect third-party services directly within conversations.

The company says these integrations support tasks such as bookings, file handling and document creation without switching applications.

Russia considers restoring Roblox access after compliance talks

Roblox has signalled willingness to comply with Russian law, opening the possibility of the platform being unblocked in Russia following earlier access restrictions.

Roskomnadzor stated that cooperation could resume if Roblox demonstrates concrete steps, rather than mere declarations, towards meeting domestic legal requirements.

The regulator said Roblox acknowledged shortcomings in moderating game content and ensuring the safety of user chats, particularly involving minors.

Russian authorities stressed that compliance would require systematic measures to remove harmful material and prevent criminal communication rather than partial adjustments.

Access to Roblox was restricted in early December after officials cited the spread of content linked to extremist and terrorist activity.

Roskomnadzor indicated that continued engagement and demonstrable compliance could allow the platform to restore operations under Russian regulatory oversight.

OpenAI expands AI training for newsrooms worldwide

The US tech company OpenAI has launched the OpenAI Academy for News Organisations, a new learning hub designed to support journalists, editors and publishers adopting AI in their work.

The initiative builds on existing partnerships with the American Journalism Project and The Lenfest Institute for Journalism, reflecting a broader effort to strengthen journalism as a pillar of democratic life.

The Academy goes live with practical training, newsroom-focused playbooks and real-world examples aimed at helping news teams save time and focus on high-impact reporting.

Areas of focus include investigative research, multilingual reporting, data analysis, production efficiency and operational workflows that sustain news organisations over time.

Responsible use sits at the centre of the programme. Guidance on governance, internal policies and ethical deployment is intended to address concerns around trust, accuracy and newsroom culture, recognising that AI adoption raises structural questions rather than purely technical ones.

OpenAI plans to expand the Academy in the year ahead with additional courses, case studies and live programming.

Through collaboration with publishers, industry bodies and journalism networks worldwide, the Academy is positioned as a shared learning space that supports editorial independence while adapting journalism to an AI-shaped media environment.

Google launches Gemini 3 Flash for scalable frontier AI

The US tech giant Google has unveiled Gemini 3 Flash, a new frontier AI model designed for developers who need high reasoning performance combined with speed and low cost.

Built on the multimodal and agentic foundations of Gemini 3 Pro, Gemini 3 Flash delivers faster responses at less than a quarter of the price, while surpassing Gemini 2.5 Pro across several major benchmarks.

The model is rolling out through the Gemini API, Google AI Studio, Vertex AI, Android Studio and other developer platforms, offering higher rate limits, batch processing and context caching that significantly reduce operational costs.

Gemini 3 Flash achieves frontier-level results on advanced reasoning benchmarks while remaining optimised for large-scale production workloads, reinforcing Google’s focus on efficiency alongside intelligence.

Early adopters are already deploying Gemini 3 Flash across coding, gaming, deepfake detection and legal document analysis, benefiting from improved agentic capabilities and near real-time multimodal reasoning.

By lowering cost barriers while expanding performance, Gemini 3 Flash enhances Google’s competitive position in the rapidly evolving AI model market. It broadens access to advanced AI systems for developers and enterprises.
