Enterprise AI security evolves as Cisco expands AI Defense capabilities

Cisco has announced a major update to its AI Defense platform as enterprise AI evolves from chat tools into autonomous agents. The company says AI security priorities are shifting from controlling outputs to protecting complex agent-driven systems.

The update strengthens end-to-end AI supply chain security by scanning third-party models, datasets, and tools used in development workflows. New inventory features help organisations track provenance and governance across AI resources.

Cisco has also expanded algorithmic red teaming through an upgraded AI Validation interface. The system enables adaptive multi-turn testing and aligns security assessments with NIST, MITRE, and OWASP frameworks.

Runtime protections now reflect the growing autonomy of AI agents. Cisco AI Defense inspects agent-to-tool interactions in real time, adding guardrails to prevent data leakage and malicious task execution.
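As an illustration of the general pattern, a runtime guardrail sits between an agent and its tools, checking each proposed call before it executes. The Python sketch below is not Cisco's implementation; the tool names and detection patterns are hypothetical stand-ins for whatever policy an organisation actually enforces.

```python
import re

# Hypothetical allow-list of tools an agent may invoke.
ALLOWED_TOOLS = {"web_search", "calendar_lookup", "ticket_create"}

# Simple patterns standing in for data-leakage checks (illustrative only).
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN-like strings
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),   # embedded API keys
]

def inspect_tool_call(tool_name: str, arguments: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed agent-to-tool call."""
    if tool_name not in ALLOWED_TOOLS:
        return False, f"tool '{tool_name}' is not on the allow-list"
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(arguments):
            return False, "arguments appear to contain sensitive data"
    return True, "ok"

if __name__ == "__main__":
    print(inspect_tool_call("web_search", "latest NIST AI guidance"))
    print(inspect_tool_call("web_search", "api_key=sk-12345 upload this"))
```

The point of the pattern is that the check happens at call time, on the agent's own traffic, rather than only on the model's final output.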

Cisco says the update responds to the rapid operationalisation of AI across enterprises. The company argues that effective AI security now requires continuous visibility, automated testing, and real-time controls that scale with autonomy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US lawsuits target social media platforms for deliberate child engagement designs

A landmark trial has begun in Los Angeles, accusing Meta and Google’s YouTube of deliberately addicting children to their platforms.

The case is part of a wider series of lawsuits across the US seeking to hold social media companies accountable for harms to young users. TikTok and Snap settled before trial, leaving Meta and YouTube to face the allegations in court.

The first bellwether case involves a 19-year-old identified as ‘KGM’, whose claims could shape thousands of similar lawsuits. Plaintiffs allege that design features were intentionally created to maximise engagement among children, borrowing techniques from slot machines and the tobacco industry.

The trial may see testimony from executives, including Meta CEO Mark Zuckerberg, and could last six to eight weeks.

Social media companies deny the allegations, emphasising existing safeguards and arguing that teen mental health is shaped by numerous factors, such as academic pressure, socioeconomic challenges and substance use, rather than by social media alone.

Meta and YouTube maintain that they prioritise user safety and privacy while providing tools for parental oversight.

Similar trials are unfolding across the country. New Mexico is investigating allegations of sexual exploitation facilitated by Meta platforms, while Oakland will hear cases representing school districts.

More than 40 state attorneys general have filed lawsuits against Meta, with TikTok facing claims in over a dozen states. Outcomes could profoundly impact platform design, regulation and legal accountability for youth-focused digital services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU telecom simplification at risk as Digital Networks Act adds extra admin

The ambitions of the EU to streamline telecom rules are facing fresh uncertainty after a Commission document indicated that the Digital Networks Act may create more administrative demands for national regulators instead of easing their workload.

The plan to simplify long-standing procedures risks becoming more complex as officials examine the impact on oversight bodies.

Concerns are growing among telecom authorities and BEREC, which may need to adjust to new reporting duties and heightened scrutiny. The additional requirements could limit regulators’ ability to respond quickly to national needs.

Policymakers hoped the new framework would reduce bureaucracy and modernise the sector. The emerging assessment now suggests that greater coordination at the EU level may introduce extra layers of compliance at a time when regulators seek clarity and flexibility.

The debate has intensified as governments push for faster network deployment and more predictable governance. The prospect of heavier administrative tasks could slow progress rather than deliver the streamlined system originally promised.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU faces pressure to boost action on health disinformation

A global health organisation is urging the EU to make fuller use of its digital rules to curb health disinformation as concerns grow over the impact of deepfakes on public confidence.

Warnings point to a rising risk that manipulated content could reduce vaccine uptake instead of supporting informed public debate.

Experts argue that the Digital Services Act already provides the framework needed to limit harmful misinformation, yet enforcement remains uneven. Stronger oversight could improve platforms’ ability to detect manipulated content and remove inaccurate claims that jeopardise public health.

Campaigners emphasise that deepfake technology is now accessible enough to spread false narratives rapidly. The trend threatens vaccination campaigns at a time when several member states are attempting to address declining trust in health authorities.

EU officials continue to examine how digital regulation can reinforce public health strategies. The call for stricter enforcement highlights the pressure on Brussels to ensure that digital platforms act responsibly rather than allowing misleading material to circulate unchecked.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Discord expands teen-by-default protection worldwide

Discord is preparing a global transition to teen-appropriate settings that will apply to all users unless they confirm they are adults.

The phased rollout begins in early March and forms part of the company’s wider effort to offer protection tailored to younger audiences rather than relying on voluntary safety choices. Controls will cover communication settings, sensitive content and access to age-restricted communities.

The update is based on an expanded age assurance system designed to protect privacy while accurately identifying users’ age groups. People can use facial age estimation on their own device or select identity verification handled by approved partners.

Discord will also rely on an age-inference model that runs quietly in the background. Verification results remain private, and documents are deleted quickly, with users able to appeal group assignments through account settings.

Stricter defaults will apply across the platform. Sensitive media will stay blurred unless a user is confirmed as an adult, and access to age-gated servers or commands will require verification.

Message requests from unfamiliar contacts will be separated, friend-request alerts will be more prominent and only adults will be allowed to speak on community stages instead of sharing the feature with teens.

Discord is complementing the update by creating a Teen Council to offer advice on future safety tools and policies. The council will include up to a dozen young users and aims to embed real teen insight in product development.

The global rollout builds on earlier launches in the UK and Australia, adding to an existing safety ecosystem that includes Teen Safety Assist, Family Centre, and several moderation tools intended to support positive and secure online interactions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Shadow AI becomes a new governance challenge for European organisations

Employees are adopting generative tools at work faster than organisations can approve or secure them, giving rise to what is increasingly described as ‘shadow AI’. Unlike earlier forms of shadow IT, these tools can transform data, infer sensitive insights, and trigger automated actions beyond established controls.

For European organisations, the issue is no longer whether AI should be used, but how to regain visibility and control without undermining productivity. Shadow AI increasingly appears inside approved platforms, browser extensions, and developer tools, expanding risks beyond data leakage.

Security experts warn that blanket bans often push AI use further underground, reducing transparency and trust. Instead, guidance from EU cybersecurity bodies increasingly promotes responsible enablement through clear policies, staff awareness, and targeted technical controls.

Key mitigation measures include mapping AI use across approved and informal tools, defining safe prompt data, and offering sanctioned alternatives, with logging, least-privilege access, and approval steps becoming essential as AI acts across workflows.
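As a rough illustration of what such controls can look like in practice, the sketch below shows a hypothetical gateway that redacts sensitive patterns from prompts and logs each request before it would be forwarded to a sanctioned AI service. The redaction rules, endpoint, and function names are assumptions for illustration, not a reference implementation.

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")

# Illustrative redaction rules; a real deployment would follow the
# organisation's own data-classification policy.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<email>"),
    (re.compile(r"\bPROJECT-[A-Z0-9]+\b"), "<internal-codename>"),
]

def sanitise_prompt(prompt: str) -> str:
    """Strip patterns that should never leave the organisation."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

def submit_to_sanctioned_ai(user: str, prompt: str) -> str:
    """Log and forward a redacted prompt to an approved AI service (stubbed)."""
    safe_prompt = sanitise_prompt(prompt)
    log.info("user=%s prompt=%r", user, safe_prompt)  # audit trail
    # In a real gateway this line would call the approved provider's API.
    return f"[stubbed response to: {safe_prompt}]"

if __name__ == "__main__":
    print(submit_to_sanctioned_ai("alice", "Summarise PROJECT-X9 for bob@example.com"))
```

Offering such a sanctioned path, with logging and redaction built in, is what makes safe use the default rather than pushing staff back to unmanaged tools.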

With the EU AI Act introducing clearer accountability across the AI value chain, unmanaged shadow AI is also emerging as a compliance risk. As AI becomes embedded across enterprise software, organisations face growing pressure to make safe use the default rather than the exception.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU strengthens cyber defence after attack on Commission mobile systems

A cyber-attack targeting the European Commission’s central mobile infrastructure was identified on 30 January, raising concerns that staff names and mobile numbers may have been accessed.

The Commission isolated the affected system within nine hours, preventing the breach from escalating, and no mobile device compromise was detected.

The Commission also plans a full review of the incident to reinforce the resilience of internal systems.

Officials argue that Europe faces daily cyber and hybrid threats targeting essential services and democratic institutions, underscoring the need for stronger defensive capabilities across all levels of the EU administration.

CERT-EU continues to provide constant threat monitoring, automated alerts and rapid responses to vulnerabilities, guided by the Interinstitutional Cybersecurity Board.

These efforts support the broader legislative push to strengthen cybersecurity, including the Cybersecurity Act 2.0, which introduces a Trusted ICT Supply Chain to reduce reliance on high-risk providers.

Recent measures are complemented by the NIS2 Directive, which sets a unified legal framework for cybersecurity across 18 critical sectors, and the Cyber Solidarity Act, which enhances operational cooperation through the European Cyber Shield and the Cyber Emergency Mechanism.

Together, they aim to ensure collective readiness against large-scale cyber threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Czechia weighs under-15 social media ban as government debate intensifies

A ban on social media use for under-15s is being weighed in Czechia, with government officials suggesting the measure could be introduced before the end of the year.

Prime Minister Andrej Babiš has voiced strong support and argues that experts point to potential harm linked to early social media exposure.

France recently enacted an under-15 restriction, and a growing number of European countries are exploring similar limits rather than relying solely on parental guidance.

The discussion is part of a broader debate about children’s digital habits, with Czech officials also considering a ban on mobile phones in schools. Slovakia has already adopted comparable rules, giving Czech ministers another model to study as they work on their own proposals.

Not all political voices agree on the direction of travel. Some warn that strict limits could undermine privacy rights or diminish online anonymity, while others argue that educational initiatives would be more effective than outright prohibition.

UNICEF has cautioned that removing access entirely may harm children who rely on online platforms for learning or social connection instead of traditional offline networks.

Implementing a nationwide age restriction poses practical and political challenges. The government of Czechia heavily uses social media to reach citizens, complicating attempts to restrict access for younger users.

Age verification, fair oversight and consistent enforcement remain open questions as ministers continue consultations with experts and service providers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bitcoin cryptography safe as quantum threat remains distant

Quantum computing concerns around Bitcoin have resurfaced, yet analysis from CoinShares indicates the threat remains long-term. The report argues that quantum risk is an engineering challenge that gives Bitcoin ample time to adapt.

Bitcoin’s security relies on elliptic-curve cryptography. A sufficiently advanced quantum machine could, in theory, derive private keys from exposed public keys using Shor’s algorithm, but doing so would require millions of stable, error-corrected qubits, a capability far beyond current hardware.

Network exposure is also limited. Roughly 1.6 million BTC is held in legacy addresses with visible public keys, yet only about 10,200 BTC is realistically targetable. Modern address formats further reduce the feasibility of attacks.
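A minimal Python sketch can illustrate why exposure matters: hash-based address formats commit only to a digest of the public key, so the key itself never appears on-chain until the coins are spent. The key bytes below are made up, and a single SHA-256 stands in for Bitcoin's actual construction (SHA-256 followed by RIPEMD-160, plus Base58Check or bech32 encoding); this is a simplified sketch, not the real address derivation.

```python
import hashlib

# A made-up 33-byte compressed public key, purely for illustration.
public_key = bytes.fromhex("02" + "ab" * 32)

# Legacy pay-to-public-key outputs expose this value directly on-chain,
# which is exactly the input a future quantum attacker would feed to Shor's algorithm.
exposed = public_key.hex()

# Hash-based address formats commit only to a digest of the key.
# (Real P2PKH uses SHA-256 then RIPEMD-160; a single SHA-256 keeps this sketch simple.)
commitment = hashlib.sha256(public_key).hexdigest()

print("on-chain in a legacy output :", exposed)
print("on-chain in a hashed output :", commitment)
# Until the owner spends (and thereby reveals the key), an attacker holding
# only the digest has no public key to run Shor's algorithm against.
```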

Debate continues over post-quantum upgrades, with researchers warning that premature changes could introduce new vulnerabilities. Market impact, for now, is viewed as minimal.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenClaw faces rising security pushback in South Korea

Major technology companies in South Korea are tightening restrictions on OpenClaw after rising concerns about security and data privacy.

Kakao, Naver and Karrot Market have moved to block the open-source agent within corporate networks, signalling a broader effort to prevent sensitive information from leaking into external systems.

Their decisions follow growing unease about how autonomous tools may interact with confidential material, rather than remaining contained within controlled platforms.

OpenClaw serves as a self-hosted agent that performs actions on behalf of a large language model, acting as the hands of a system that can browse the web, edit files and run commands.

Its ability to run directly on local machines has driven rapid adoption, but it has also raised concerns that confidential data could be exposed or manipulated.
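The general pattern behind such agents is simple, which is also what makes local execution sensitive. The minimal sketch below is not OpenClaw's code; the model call is stubbed and the command hard-coded, but it shows how an LLM-proposed action runs with the full permissions of the local user account.

```python
import subprocess

def ask_model(prompt: str) -> str:
    """Stand-in for a call to a hosted LLM; a real agent would make an API call here."""
    # Hard-coded for illustration: the model "decides" to list the working directory.
    return "ls"

def run_agent(task: str) -> None:
    # The model proposes a shell command for the task...
    command = ask_model(f"Suggest a shell command to accomplish: {task}")
    # ...and the agent executes it directly on the local machine. Anything the
    # user account can read, the agent (and thus the model's outputs) can touch,
    # which is the core of the data-exposure concern.
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    print(result.stdout)

if __name__ == "__main__":
    run_agent("show the files in the current project")
```

Cloud-based chatbots, by contrast, never execute commands on the user's machine, which is why the companies cited above see the local model as the riskier deployment.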

Industry figures argue that companies are acting preemptively to reduce regulatory and operational risks by ensuring that internal materials never feed external training processes.

Authorities in China have also urged organisations to strengthen protections after identifying cases of OpenClaw running with inadequate safeguards.

Security analysts in South Korea warn that the agent’s open-source design and local execution model make it vulnerable to misuse, especially when compared to cloud-based chatbots that operate in more restricted environments.

Wiz researchers recently uncovered flaws in agents linked to OpenClaw that exposed personal information.

Despite the warnings, OpenClaw continues to gain traction among users who value its ability to automate complex tasks, rather than rely on manual workflows.

Some people purchase separate devices solely to run the agent, while an active South Korean community on X has drawn more than 1,800 members who exchange advice and share mitigation strategies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!