New Kimwolf Android botnet linked to record-breaking DDoS attacks

Cybersecurity researchers have uncovered a rapidly expanding Android botnet known as Kimwolf, which has already compromised approximately 1.8 million devices worldwide.

The malware primarily targets smart TVs, set-top boxes, and tablets connected to residential networks, with infections concentrated in countries including Brazil, India, the US, Argentina, South Africa, and the Philippines.

Analysis by QiAnXin XLab indicates that Kimwolf demonstrates a high degree of operational resilience.

Despite multiple disruptions to its command-and-control infrastructure, the botnet has repeatedly re-emerged with enhanced capabilities, including the adoption of Ethereum Name Service to harden its communications against takedown efforts.

Researchers also identified significant similarities between Kimwolf and AISURU, one of the most powerful botnets observed in recent years. Shared source code, infrastructure, and infection scripts suggest both botnets are operated by the same threat group and have coexisted on large numbers of infected devices.

AISURU has previously drawn attention for launching record-setting distributed denial-of-service attacks, including traffic peaks approaching 30 terabits per second.

The emergence of Kimwolf alongside such activity highlights the growing scale and sophistication of botnet-driven cyber threats targeting global internet infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Healthcare faces growing compliance pressure from AI adoption

AI is becoming a practical tool across healthcare as providers face rising patient demand, chronic disease and limited resources.

These AI systems increasingly support tasks such as clinical documentation, billing, diagnostics and personalised treatment, replacing manual processes and allowing clinicians to focus more directly on patient care.

At the same time, AI introduces significant compliance and safety risks. Algorithmic bias, opaque decision-making, and outdated training data can affect clinical outcomes, raising questions about accountability when errors occur.

Regulators are signalling that healthcare organisations cannot delegate responsibility to automated systems and must retain meaningful human oversight over AI-assisted decisions.

Regulatory exposure spans federal and state frameworks, including HIPAA privacy rules, FDA oversight of AI-enabled medical devices and enforcement under the False Claims Act.

Healthcare providers are expected to implement robust procurement checks, continuous monitoring, governance structures and patient consent practices as AI regulation evolves towards a more coordinated national approach.


US platforms signal political shift in DSA risk reports

Major online platforms have submitted their 2025 systemic risk assessments under the Digital Services Act as the European Commission moves towards issuing its first fine against a Very Large Online Platform.

The reports arrive amid mounting political friction between Brussels and Washington, placing platform compliance under heightened scrutiny on both regulatory and geopolitical fronts.

Several US-based companies adjusted how risks related to hate speech, misinformation and diversity are framed, reflecting political changes in the US while maintaining formal alignment with EU law.

Meta softened enforcement language, reclassified hate speech under broader categories and reduced visibility of civil rights structures, while continuing to emphasise freedom of expression as a guiding principle.

Google and YouTube similarly narrowed references to misinformation, replaced established terminology with less charged language and limited enforcement narratives to cases involving severe harm.

LinkedIn followed comparable patterns, removing references to earlier commitments on health misinformation, civic integrity and EU voluntary codes that have since been integrated into the DSA framework.

X largely retained its prior approach, although its report continues to reference cooperation with governments and civil society that contrasts with the platform’s public positioning.

TikTok diverged from other platforms by expanding disclosures on hate speech, election integrity and fact-checking, likely reflecting its vulnerability to regulatory action in both the EU and the US.

European regulators are expected to assess whether these shifts represent genuine risk mitigation or strategic alignment with US political priorities.

As systemic risk reports increasingly inform enforcement decisions, subtle changes in language, scope and emphasis may carry regulatory consequences well beyond their formal compliance function.


Gemini users can now build custom AI mini-apps with Opal

Google has expanded the availability of Opal, a no-code experimental tool from Google Labs, by integrating it directly into the Gemini web application.

This integration allows users to build AI-powered mini-apps, known as Gems, without writing any code, using natural language descriptions and a visual workflow editor inside Gemini’s interface.

Previously available only via separate Google Labs experiments, Opal now appears in the Gems manager section of the Gemini web app, where users can describe the functionality they want and have Gemini generate a customised mini-app.

These mini-apps can be reused for specific tasks and workflows and saved as part of a user’s Gem collection.

The no-code ‘vibe-coding’ approach aims to democratise AI development by enabling creators, developers and non-technical users alike to build applications that automate or augment tasks, all through intuitive language prompts and visual building blocks.


OpenAI adds pinned chat feature to ChatGPT apps

US tech company OpenAI has begun rolling out a pinned chats feature in ChatGPT across web, Android and iOS, allowing users to keep selected conversations fixed at the top of their chat history for faster access.

The function mirrors familiar behaviour from messaging platforms such as WhatsApp and Telegram, sparing users from repeatedly scrolling through past chats.

Users can pin a conversation by selecting the three-dot menu on the web or by long-pressing on mobile devices, ensuring that essential discussions remain visible regardless of how many new chats are created.

The update follows earlier interface changes aimed at helping users explore conversation paths without losing the original discussion thread.

Alongside pinned chats, OpenAI is moving ChatGPT toward a more app-driven experience through an internal directory that allows users to connect third-party services directly within conversations.

The company says these integrations support tasks such as bookings, file handling and document creation without switching applications.


Russia considers restoring Roblox access after compliance talks

Roblox has signalled willingness to comply with Russian law, opening the possibility of the platform being unblocked in Russia following earlier access restrictions.

Roskomnadzor stated that cooperation could resume if Roblox demonstrates concrete steps, rather than mere declarations, towards meeting domestic legal requirements.

The regulator said Roblox acknowledged shortcomings in moderating game content and ensuring the safety of user chats, particularly involving minors.

Russian authorities stressed that compliance would require systematic measures to remove harmful material and prevent criminal communication rather than partial adjustments.

Access to Roblox was restricted in early December after officials cited the spread of content linked to extremist and terrorist activity.

Roskomnadzor indicated that continued engagement and demonstrable compliance could allow the platform to restore operations under Russian regulatory oversight.


OpenAI expands AI training for newsrooms worldwide

US tech company OpenAI has launched the OpenAI Academy for News Organisations, a new learning hub designed to support journalists, editors and publishers adopting AI in their work.

The initiative builds on existing partnerships with the American Journalism Project and The Lenfest Institute for Journalism, reflecting a broader effort to strengthen journalism as a pillar of democratic life.

The Academy goes live with practical training, newsroom-focused playbooks and real-world examples aimed at helping news teams save time and focus on high-impact reporting.

Areas of focus include investigative research, multilingual reporting, data analysis, production efficiency and operational workflows that sustain news organisations over time.

Responsible use sits at the centre of the programme. Guidance on governance, internal policies and ethical deployment is intended to address concerns around trust, accuracy and newsroom culture, recognising that AI adoption raises structural questions rather than purely technical ones.

OpenAI plans to expand the Academy in the year ahead with additional courses, case studies and live programming.

Through collaboration with publishers, industry bodies and journalism networks worldwide, the Academy is positioned as a shared learning space that supports editorial independence while adapting journalism to an AI-shaped media environment.


Google launches Gemini 3 Flash for scalable frontier AI

US tech giant Google has unveiled Gemini 3 Flash, a new frontier AI model designed for developers who need high reasoning performance combined with speed and low cost.

Built on the multimodal and agentic foundations of Gemini 3 Pro, Gemini 3 Flash delivers faster responses at less than a quarter of the price, while surpassing Gemini 2.5 Pro across several major benchmarks.

The model is rolling out through the Gemini API, Google AI Studio, Vertex AI, Android Studio and other developer platforms, offering higher rate limits, batch processing and context caching that significantly reduce operational costs.

Gemini 3 Flash achieves frontier-level results on advanced reasoning benchmarks while remaining optimised for large-scale production workloads, reinforcing Google’s focus on efficiency alongside intelligence.

Early adopters are already deploying Gemini 3 Flash across coding, gaming, deepfake detection and legal document analysis, benefiting from improved agentic capabilities and near real-time multimodal reasoning.

By lowering cost barriers while expanding performance, Gemini 3 Flash enhances Google’s competitive position in the rapidly evolving AI model market. It broadens access to advanced AI systems for developers and enterprises.


UNDP and UNESCO support AI training for judiciary

UNESCO and UNDP have partnered to enhance judicial capacity on the ethical use of AI. A three-day Bangkok training, supported by the Thailand Institute of Justice, brought together 27 judges from 13 Asia-Pacific countries to discuss the impact of AI on justice and safeguards for fairness.

Expert sessions highlighted the global use of AI in court administration, research, and case management, emphasising opportunities and risks. Participants explored ways to use AI ethically while protecting human rights and judicial integrity, warning that unsupervised tools could increase bias and undermine public trust.

Trainers emphasised that AI must be implemented with careful attention to bias, transparency, and structural inequalities.

Judges reflected on the growing complexity of verifying evidence in the age of generative AI and deepfakes, and acknowledged that responsible AI can improve access to justice, support case reviews, and free time for substantive decision-making.

The initiative concluded with a consensus that AI adoption in courts should be guided by governance, transparency, and ongoing dialogue. The UNDP will continue to collaborate in advancing ethical, human rights-focused AI in regional judiciaries.


AI-generated ads face new disclosure rules in South Korea

South Korea will require advertisers to label AI-generated or AI-assisted advertising from early 2026, marking a shift in how the country governs AI in online commerce and consumer protection.

The measure responds to a sharp rise in deceptive ads using synthetic imagery and deepfakes, particularly in healthcare and financial promotions. Regulators say transparency at the point of content delivery is intended to reduce manipulation and restore consumer trust.

Authorities in South Korea acknowledge that mandatory labelling alone may not deter malicious actors, who can bypass rules through offshore hosting or rapidly changing content. Detection challenges and uneven enforcement capacity across platforms remain open concerns.

South Korea’s industry groups warn that the policy could have uneven economic effects within the country’s advertising ecosystem. Large platforms and agencies are expected to adapt quickly, while smaller firms may face higher compliance costs that slow experimentation with generative tools.

Policymakers argue the framework aligns with South Korea’s broader AI governance strategy, positioning the country between innovation-led and precautionary regulatory models as synthetic media becomes more widespread.
