New ETSI standard defines cybersecurity rules for AI systems

ETSI has released ETSI EN 304 223, a new European Standard establishing baseline cybersecurity requirements for AI systems.

Approved by national standards bodies, the framework is the first European Standard (EN) focused specifically on securing AI, and its relevance extends beyond European markets.

The standard recognises that AI introduces security risks not found in traditional software. Threats such as data poisoning, indirect prompt injection and vulnerabilities linked to complex data management demand tailored defences instead of conventional approaches alone.

ETSI EN 304 223 combines established cybersecurity practices with targeted measures designed for the distinctive characteristics of AI models and systems.

Adopting a full lifecycle perspective, the ETSI framework defines thirteen principles across secure design, development, deployment, maintenance and end of life.

Alignment with internationally recognised AI lifecycle models supports interoperability and consistent implementation across existing regulatory and technical ecosystems.

ETSI EN 304 223 is intended for organisations across the AI supply chain, including vendors, integrators and operators, and covers systems based on deep neural networks, including generative AI.

Further guidance is expected through ETSI TR 104 159, which will focus on generative AI risks such as deepfakes, misinformation, confidentiality concerns and intellectual property protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

RCB to use AI cameras at Chinnaswamy Stadium for crowd management

The Royal Challengers Bengaluru (RCB) franchise has announced plans to install AI-enabled camera systems at M. Chinnaswamy Stadium in Bengaluru ahead of the upcoming Indian Premier League (IPL) season.

The AI cameras are intended to support stadium security teams by providing real-time crowd management, identifying high-density areas and aiding safer entry and exit flows.

The system will use computer vision and analytics to monitor spectators and alert authorities to potential bottlenecks or risks, helping security personnel intervene proactively. RCB officials say the technology is part of broader efforts to improve spectator experience and safety, particularly in large-crowd environments.

The move reflects the broader adoption of AI and video analytics tools in sports venues to enhance operational efficiency and public safety.

Verizon responds to major network outage

Verizon has confirmed a large-scale network disruption affecting wireless voice, messaging and mobile data services, leaving many customer devices stuck in SOS mode across several regions.

The company acknowledged service interruptions during Wednesday afternoon and evening, while emergency calling capabilities remained available.

Additionally, the telecom provider issued multiple statements apologising for the disruption and pledged to provide account credits to impacted customers. Engineering teams were deployed throughout the incident, with service gradually restored later in the day.

Verizon advised users still experiencing connectivity problems to restart their devices once normal operations resumed.

Despite repeated updates, the company has not disclosed the underlying cause of the outage. Independent outage-tracking platforms described the incident as a severe breakdown in cellular connectivity, with most reports citing complete signal loss and mobile phone failures.

Verizon stated that further updates would be shared following internal reviews, while rival mobile networks reported no comparable disruptions during the same period.

Cloudflare acquires Human Native to build a fair AI content licensing model

San Francisco-based company Cloudflare has acquired Human Native, an AI data marketplace designed to connect content creators with AI developers seeking high-quality training and inference material.

The move reflects growing pressure to establish clearer economic rules for how AI systems use online content.

The acquisition is intended to help creators and publishers decide whether to block AI access entirely, optimise material for machine use, or license content for payment instead of allowing uncontrolled scraping.

Cloudflare says the tools developed through Human Native will support transparent pricing and fair compensation across the AI supply chain.

Human Native, founded in 2024 and backed by UK-based investors, focuses on structuring original content so it can be discovered, accessed and purchased by AI developers through standardised channels.

The team includes researchers and engineers with experience across AI research, design platforms and financial media.

Cloudflare argues that access to reliable and ethically sourced data will shape long-term competition in AI. By integrating Human Native into its wider platform, the company aims to support a more sustainable internet economy that balances innovation with creator rights.

Brazil excluded from WhatsApp rival AI chatbot ban

WhatsApp has excluded Brazil from its new restriction on third-party general-purpose chatbots, allowing AI providers to continue operating on the platform despite a broader policy shift affecting other markets.

The decision follows action by the competition authority of Brazil, which ordered Meta to suspend elements of the policy while assessing whether the rules unfairly disadvantage rival chatbot providers in favour of Meta AI.

Developers have been informed that services linked to Brazilian phone numbers do not need to stop responding to users or issue service warnings.

Elsewhere, WhatsApp has introduced a 90-day grace period beginning in mid-January, after which chatbot developers must halt responses and notify users that their services will no longer function on the app.

The policy applies to tools such as ChatGPT and Grok, while customer service bots used by businesses remain unaffected.

Italy has already secured a similar exemption after regulatory scrutiny, while the EU has opened an antitrust investigation into the new rules.

Meta continues to argue that general-purpose AI chatbots place technical strain on systems designed for business messaging instead of acting as an open distribution platform for AI services.

EU lawmakers push limits on AI nudity apps

More than 50 EU lawmakers have called on the European Commission to clarify whether AI-powered nudification apps are prohibited under existing EU legislation, citing concerns about online harm and legal uncertainty.

The request follows public scrutiny of Grok, the xAI-owned chatbot, which was found to generate manipulated intimate images of women and minors.

Lawmakers argue that such systems enable gender-based online violence and the production of child sexual abuse material instead of legitimate creative uses.

In their letter, lawmakers questioned whether current provisions under the EU AI Act sufficiently address nudification tools or whether additional prohibitions are required. They also warned that enforcement focused only on the largest online platforms risks leaving similar applications operating elsewhere.

While EU authorities have taken steps under the Digital Services Act to assess platform responsibilities, lawmakers stressed the need for broader regulatory clarity and consistent application across the digital market.

Further political debate on the issue is expected in the coming days.

Australia’s social media age limit prompts restrictions on millions of under-16 accounts

Major social media platforms restricted access to approximately 4.7 million accounts linked to children under 16 across Australia during early December, following the introduction of the national social media minimum age requirement.

Initial figures collected by eSafety indicate that platforms with high youth usage are already engaging in early compliance efforts.

Since the obligation took effect on 10 December, regulatory focus has shifted towards monitoring and enforcement instead of preparation, targeting services assessed as age-restricted.

Early data suggests meaningful steps are being taken, although authorities stress it remains too soon to determine whether platforms have achieved full compliance.

eSafety has emphasised continuous improvement in age-assurance accuracy, alongside the industry’s responsibility to prevent circumvention.

Reports indicate some under-16 accounts remain active, although early signals point towards reduced exposure and gradual behavioural change rather than immediate elimination.

Officials note that the broader impact of the minimum age policy will emerge over time, supported by a planned independent, longitudinal evaluation involving academic and youth mental health experts.

Data collection will continue to monitor compliance, platform migration trends and long-term safety outcomes for children and families in Australia.

Britain’s transport future tied to AI investment

AI is expected to play an increasingly important role in improving Britain’s road and rail networks. MPs highlighted its potential during a transport-focused industry summit in Parliament.

The Transport Select Committee chair welcomed government investment in AI and infrastructure. Road maintenance, connectivity and reduced delays were cited as priorities for economic growth.

UK industry leaders showcased AI tools that autonomously detect and repair potholes. Businesses said more intelligent systems could improve reliability while cutting costs and disruption.

Experts warned that stronger cybersecurity must accompany AI deployment. Safeguards are needed to protect critical transport infrastructure from external threats and misuse.

Belgian hospital AZ Monica hit by cyberattack

A cyberattack hit AZ Monica hospital in Belgium, forcing the shutdown of all servers, cancellation of scheduled procedures, and transfer of critical patients. The hospital network, with campuses in Antwerp and Deurne, provides acute, outpatient, and specialised care to the local population.

The attack was detected at 6:32 a.m., prompting staff to disconnect systems proactively. While urgent care continues, non-urgent consultations and surgeries have been postponed due to restricted access to the digital medical record.

Seven critical patients were safely transferred with Red Cross support.

Authorities and hospital officials have launched an investigation, notifying police and prosecutors. Details of the attack remain unclear, and unverified reports of a ransom demand have not been confirmed.

The hospital emphasised that patient safety and continuity of care are top priorities.

Cyberattacks on hospitals can severely disrupt medical services, delay urgent treatments, and put patients’ lives at risk, highlighting the growing vulnerability of healthcare systems to digital threats.

Microsoft disrupts global RedVDS cybercrime network

Microsoft has launched a joint legal action in the US and the UK to dismantle RedVDS, a subscription service supplying criminals with disposable virtual computers for large-scale fraud. The operation, carried out with German authorities and Europol, seized key domains and shut down the RedVDS marketplace.

RedVDS enabled sophisticated attacks, including business email compromise and real estate payment diversion schemes. Since March 2025, it has caused about US $40 million in US losses, hitting organisations like H2-Pharma and Gatehouse Dock Condominium Association.

Globally, over 191,000 organisations have been impacted by RedVDS-enabled fraud, often combined with AI-generated emails and multimedia impersonation.

Microsoft emphasises that targeting the infrastructure, rather than individual attackers, is key. International cooperation disrupted servers and payment networks supporting RedVDS and helped identify those responsible.

Users are advised to verify payment requests, use multifactor authentication, and report suspicious activity to reduce risk.

The civil action marks the 35th case by Microsoft’s Digital Crimes Unit, reflecting a sustained commitment to dismantling online fraud networks. As cybercrime evolves, Microsoft and partners aim to block criminals and protect people and organisations globally.
