New ETSI standard defines cybersecurity rules for AI systems

ETSI has released ETSI EN 304 223, a new European Standard establishing baseline cybersecurity requirements for AI systems.

Approved by national standards bodies, the framework becomes the first globally applicable EN focused specifically on securing AI, extending its relevance beyond European markets.

The standard recognises that AI introduces security risks not found in traditional software. Threats such as data poisoning, indirect prompt injection and vulnerabilities linked to complex data management demand tailored defences that conventional approaches alone cannot provide.

ETSI EN 304 223 combines established cybersecurity practices with targeted measures designed for the distinctive characteristics of AI models and systems.

Adopting a full lifecycle perspective, the ETSI framework defines thirteen principles across secure design, development, deployment, maintenance and end of life.

Alignment with internationally recognised AI lifecycle models supports interoperability and consistent implementation across existing regulatory and technical ecosystems.

ETSI EN 304 223 is intended for organisations across the AI supply chain, including vendors, integrators and operators, and covers systems based on deep neural networks, including generative AI.

Further guidance is expected through ETSI TR 104 159, which will focus on generative AI risks such as deepfakes, misinformation, confidentiality concerns and intellectual property protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

RCB to use AI cameras at Chinnaswamy Stadium for crowd management

The Royal Challengers Bengaluru (RCB) franchise has announced plans to install AI-enabled camera systems at M. Chinnaswamy Stadium in Bengaluru ahead of the upcoming Indian Premier League (IPL) season.

The AI cameras are intended to support stadium security teams by providing real-time crowd management, identifying high-density areas and aiding safer entry and exit flows.

The system will use computer vision and analytics to monitor spectators and alert authorities to potential bottlenecks or risks, helping security personnel intervene proactively. RCB officials say the technology is part of broader efforts to improve spectator experience and safety, particularly in large-crowd environments.

The move reflects the broader adoption of AI and video analytics tools in sports venues to enhance operational efficiency and public safety.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How autonomous vehicles shape physical AI trust

Physical AI is increasingly embedded in public and domestic environments, from self-driving vehicles to delivery robots and household automation. As intelligent machines begin to operate alongside people in shared spaces, trust, rather than technological novelty alone, emerges as the central condition for adoption.

Autonomous vehicles provide the clearest illustration of how trust must be earned through openness, accountability, and continuous engagement.

Self-driving systems address long-standing challenges such as road safety, congestion, and unequal access to mobility by relying on constant perception, rule-based behaviour, and fatigue-free operation.

Trials and early deployments suggest meaningful improvements in safety and efficiency, yet public confidence remains uneven. Social acceptance depends not only on performance outcomes but also on whether communities understand how systems behave and why specific decisions occur.

Dialogue plays a critical role at two levels. Ongoing communication among policymakers, developers, emergency services, and civil society helps align technical deployment with social priorities such as safety, accessibility, and environmental impact.

At the same time, advances in explainable AI allow machines to communicate intent and reasoning directly to users, replacing opacity with interpretability and predictability.

The experience of autonomous vehicles suggests a broader framework for physical AI governance centred on demonstrable public value, transparent performance data, and systems capable of explaining behaviour in human terms.

As physical AI expands into infrastructure, healthcare, and domestic care, trust will depend on sustained dialogue and responsible design rather than the speed of deployment alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Verizon responds to major network outage

Verizon has confirmed a large-scale network disruption affecting wireless voice, messaging, and mobile data services, leaving many customer devices operating in SOS mode across several regions.

The company acknowledged service interruptions during Wednesday afternoon and evening, while emergency calling capabilities remained available.

Additionally, the telecom provider issued multiple statements apologising for the disruption and pledged to provide account credits to impacted customers. Engineering teams were deployed throughout the incident, with service gradually restored later in the day.

Verizon advised users still experiencing connectivity problems to restart their devices once normal operations resumed.

Despite repeated updates, the company has not disclosed the underlying cause of the outage. Independent outage-tracking platforms described the incident as a severe breakdown in cellular connectivity, with most reports citing complete signal loss and mobile phone failures.

Verizon stated that further updates would be shared following internal reviews, while rival mobile networks reported no comparable disruptions during the same period.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok faces perilous legal challenge over child safety concerns

British parents suing TikTok over the deaths of their children have called for greater accountability from the platform, as hearings in the case begin in the United States. One of the claimants said social media companies must be held accountable for the content shown to young users.

Ellen Roome, whose son died in 2022, said the lawsuit is about understanding what children were exposed to online.

The legal filing claims the deaths were a foreseeable result of TikTok’s design choices, which allegedly prioritised engagement over safety. TikTok has said it prohibits content that encourages dangerous behaviour.

Roome is also campaigning for proposed legislation that would allow parents to access their children’s social media accounts after a death. She said the aim is to gain clarity and prevent similar tragedies.

TikTok said it removes most harmful content before it is reported and expressed sympathy for the families. The company is seeking to dismiss the case, arguing that the US court lacks jurisdiction.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Indian companies remain committed to AI spending

Almost all Indian companies plan to sustain AI spending even without near-term financial returns. A BCG survey shows 97 percent will keep investing, higher than the 94 percent global rate.

Corporate AI budgets in India are expected to rise to about 1.7 percent of revenue in 2026. Leaders see AI as a long-term strategic priority rather than a short-term cost.

Around 88 percent of Indian executives express confidence in AI generating positive business outcomes. That is above the global average of 82 percent, reflecting strong optimism among local decision-makers.

Despite enthusiasm, fewer Indian CEOs personally lead AI strategy than their global peers, and workforce AI skills lag international benchmarks. Analysts say talent and leadership alignment remain key as spending grows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Smarter interconnects become essential for AI processors

AI workloads are placing unprecedented strain on system-on-chip interconnects. Designers face complexity that exceeds the limits of traditional manual engineering approaches.

Semiconductor engineers are increasingly turning to automated network-on-chip design. Algorithms now generate interconnect topologies optimised for bandwidth, latency, power and area.

Physically aware automation reduces wire lengths, congestion and timing failures. Industry specialists report dramatically shorter design cycles and more predictable performance outcomes.

As AI spreads from data centres to edge devices, interconnect automation is becoming essential. The shift enables smaller teams to deliver powerful, energy efficient processors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Technology is reshaping smoke alarm safety

Smoke alarms remain critical in preventing fatal house fires, according to fire safety officials. Real-life incidents show how early warnings can allow families to escape rapidly spreading blazes.

Modern fire risks are evolving, with lithium-ion batteries and e-bikes creating fast and unpredictable fires. These incidents can release toxic gases and escalate before flames are clearly visible.

Traditional smoke alarm technology continues to perform reliably despite changes in household risks. At the same time, intelligent and AI-based systems are being developed to detect danger sooner.

Reducing false alarms has become a priority, as nuisance alerts often lead people to turn off devices. Fire experts stress that a maintained, certified smoke alarm is far safer than no smoke alarm at all.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI hoax targets Kate Garraway and family

Presenter Kate Garraway has condemned a cruel AI-generated hoax that falsely showed her with a new boyfriend. The images appeared online shortly after the death of her husband, Derek Draper.

Fake images circulated mainly on Facebook through impersonation accounts using her name and likeness. Members of the public and even friends mistakenly believed the relationship was real.

The situation escalated when fabricated news sites began publishing false stories involving her teenage son Billy. Garraway described the experience as deeply hurtful during an already raw period.

Her comments followed renewed scrutiny of AI image tools and platform responsibility. Recent restrictions aim to limit harmful and misleading content generated using artificial intelligence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft obtains UK and US court orders to disable cybercrime infrastructure

Microsoft has obtained court orders in the United Kingdom and the United States to disrupt the cybercrime-as-a-service platform RedVDS, marking the first time its Digital Crimes Unit (DCU) has pursued a major civil action outside the US.

According to Microsoft, the legal action targeted infrastructure supporting RedVDS, a service that provided virtualised computing resources used in fraud and other cyber-enabled criminal activity. The company sought relief in the UK courts because elements of the platform’s infrastructure were hosted by a UK-based provider, and a significant number of affected victims were located in the UK.

It is reported that the action was conducted with support from Europol’s European Cybercrime Centre (EC3), as well as German authorities, including the Central Office for Combating Internet Crime (ZIT) at the Frankfurt-am-Main Public Prosecutor’s Office and the Criminal Police Office of the state of Brandenburg.

RedVDS operated on a subscription basis, with access reportedly available for approximately $24 per month. The service provided customers with short-lived virtual machines, which could be used to support activities such as phishing campaigns, hosting malicious infrastructure, and facilitating online fraud.

Microsoft states that RedVDS infrastructure has been used in a range of cyber-enabled criminal activities since September 2025, including business email compromise (BEC). In BEC cases, attackers impersonate trusted individuals or organisations to induce victims to transfer funds to accounts under the attackers’ control.

According to Microsoft’s assessment, users of the service targeted organisations across multiple sectors and regions. The real estate sector was among those affected, with estate agents, escrow agents, and title companies reportedly targeted in Australia and Canada. Microsoft estimates that several thousand organisations in that sector experienced some level of impact.

The company also noted that RedVDS users combined the service with other tools, including generative AI technologies, to scale operations, identify potential targets, and generate fraudulent content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!