Cisco report highlights cybersecurity risks and benefits of industrial AI

AI is becoming central to industrial networking strategies, but it is also creating new security challenges, according to Cisco’s 2026 State of Industrial AI Report.

Based on a survey of 1,000 professionals across 19 countries and 21 sectors, the report shows organisations view cybersecurity as both a barrier and an opportunity for AI adoption. About 40% cited cybersecurity concerns as a major obstacle, while 48% named security their biggest networking challenge.

At the same time, many organisations believe AI will strengthen their cyber resilience. Cisco noted that ‘while security gaps are limiting AI scale today, organisations view AI as a tool to strengthen detection, monitoring and resilience’.

The report also highlights organisational challenges, particularly collaboration between IT and operational technology teams. Only 20% of organisations report fully collaborative IT and OT cybersecurity operations, despite the growing importance of coordination for AI deployment.

Cisco said industrial AI adoption is accelerating, with 61% of organisations already deploying AI in industrial environments. However, only one in five has reached mature, scaled adoption, suggesting many deployments remain in early stages.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Alibaba Qwen AI faces major disruption after one key leader steps down

Junyang Lin, a central technical leader of Alibaba’s Qwen AI project, has stepped down just one day after the company unveiled its Qwen 3.5 small models. Lin, who joined Alibaba in 2019 and moved to the Qwen team in 2023, did not provide details about his decision.

His departure comes at a sensitive moment, as Qwen has emerged as one of China’s most prominent open-weight AI initiatives. The project is a core element of Alibaba’s strategy to compete with leading US developers such as OpenAI, Google, and Anthropic amid intensifying global AI competition.

Alibaba’s newly launched Qwen 3.5 Small Model series comprises four multimodal models with 0.8B to 9B parameters. The systems are designed for on-device deployment and lightweight AI agents, reflecting a focus on efficient and adaptable AI applications.

The release attracted attention from figures including Elon Musk, who commented on the models’ performance. Within Alibaba and across the AI ecosystem, including partners linked to Hugging Face, Lin’s exit was described as a significant loss, particularly given his role in advancing open-source development and strengthening global developer engagement.

Crypto exchanges face strict 2027 reserve rules under new Brazil framework

Brazil’s central bank has introduced a regulatory framework requiring licensed crypto exchanges to prove asset sufficiency daily starting on 1 January 2027. The measures align digital asset intermediaries with banking standards on capital management, accounting, and data protection.

Under the rules, exchanges must submit daily attestations confirming that platforms hold adequate fiat and token reserves. Supervisors will review the reports to ensure companies can cover operational, liquidity, and cybersecurity risks while protecting customer balances.
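The daily attestation logic described above can be sketched in a few lines. This is an illustrative example only, not Brazil’s actual reporting format: the asset names, balances, and function are invented for the sake of the sketch.

```python
# Hypothetical daily reserve-sufficiency check (illustrative only; not the
# central bank's actual attestation format). All figures are invented.

def attest_reserves(reserves: dict[str, float],
                    client_balances: dict[str, float]) -> dict[str, bool]:
    """Return, per asset, whether held reserves cover aggregate client balances."""
    return {
        asset: reserves.get(asset, 0.0) >= owed
        for asset, owed in client_balances.items()
    }

# Example: BRL fiat plus two token balances (hypothetical numbers).
reserves = {"BRL": 1_200_000.0, "BTC": 15.0, "USDT": 500_000.0}
client_balances = {"BRL": 1_000_000.0, "BTC": 14.2, "USDT": 510_000.0}

attestation = attest_reserves(reserves, client_balances)
# USDT reserves fall short of client balances here, so that asset is flagged.
```

A real attestation would of course cover valuation, timing, and audit trails; the point is simply that the rule reduces to a per-asset comparison of reserves against customer liabilities.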

The framework also mandates strict segregation of company and client assets. Exchanges must maintain separate accounts for customer fiat and digital holdings to prevent commingling of funds and improve transparency for regulators.

Platforms operating in Brazil will also be required to follow a specialised accounting manual for digital assets. Standardised rules for classification, valuation, and impairment aim to ensure financial statements clearly reflect exposures across regulated entities.

Authorities will expand oversight of cross-border transfers handled by domestic crypto exchanges. Platforms must report the origins of transactions and the blockchain pathways they follow. The central bank said the framework aims to strengthen resilience and protect customer funds.

Santander and Mastercard complete Europe’s first AI agent payment

Spanish banking giant Banco Santander and Mastercard have completed what they describe as Europe’s first live end-to-end payment executed by an AI agent. The pilot combined Santander’s live payments infrastructure with Mastercard Agent Pay to enable autonomous, permission-based transactions.

Mastercard Agent Pay, launched in April 2025, allows AI agents to initiate and complete payments within predefined consumer limits. The transaction was orchestrated with support from PayOS and integrates Microsoft Azure OpenAI Service and Copilot Studio.

Following the pilot, Santander plans to expand testing and explore new partnerships across agentic commerce use cases. The bank, which manages around €1.84 trillion in assets, is positioning AI as a core driver of innovation.

AI initiatives at Santander are led by chief data and AI officer Ricardo Martín Manjón, hired from BBVA. A strategic partnership with OpenAI has also connected up to 30,000 employees to ChatGPT Enterprise in one of the fastest deployments of its kind.

Global competition in agentic payments is intensifying as Citi, US Bank and Westpac trial Mastercard Agent Pay. Westpac recently completed New Zealand’s first authenticated agentic transaction, while DBS, Visa, Axis Bank and RBL Bank are advancing similar intelligent commerce pilots.

AI tool from MIT speeds up complex engineering optimisation

MIT researchers have developed a new AI approach that helps engineers solve complex design problems faster, from power grid optimisation to vehicle safety.

The method adapts a foundation model trained on tabular data, enabling high-dimensional optimisation without retraining and significantly speeding up results.

The system pairs a foundation model with Bayesian optimisation to pinpoint the variables that most impact outcomes. By focusing on these key variables, it finds top solutions 10 to 100 times faster than existing optimisation methods.
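The underlying idea, that most of a high-dimensional problem’s behaviour is driven by a handful of variables, can be sketched in a toy example. This is not MIT’s method: a crude sensitivity screen and random search stand in for the paper’s foundation-model surrogate and Bayesian optimisation, and the objective function is invented.

```python
# Toy sketch of variable-selection speedups (not the MIT method): screen a
# 50-dimensional objective for its most influential variables, then search
# only those. The objective is invented; only dimensions 0-2 matter.
import numpy as np

rng = np.random.default_rng(0)
dim = 50

def objective(x):
    # Only the first three of 50 dimensions affect the result.
    return (x[0] - 0.7) ** 2 + (x[1] + 0.3) ** 2 + (x[2] - 0.1) ** 2

# Crude sensitivity screen: perturb each variable and measure the change.
base = np.zeros(dim)
f0 = objective(base)
sensitivity = np.array(
    [abs(objective(base + 0.5 * np.eye(dim)[i]) - f0) for i in range(dim)]
)
key_vars = np.argsort(sensitivity)[-3:]  # the most influential variables

# Random search restricted to the key variables; all others held at zero.
best = np.inf
for _ in range(200):
    x = np.zeros(dim)
    x[key_vars] = rng.uniform(-1, 1, size=3)
    best = min(best, objective(x))
```

Searching three dimensions instead of fifty is what makes the same sample budget go much further, which is the intuition behind the reported 10–100× speedups.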

Early tests show the approach excels in costly, time-consuming scenarios like car crash testing and power system design. The technique lowers computational demands and suits large-scale, high-frequency engineering challenges across multiple domains.

Researchers aim to expand the method to even higher-dimensional problems, such as naval ship design, while highlighting the broader potential of foundation models as algorithmic engines in scientific and engineering tools.

Experts see it as a practical step toward making advanced optimisation more accessible in real-world applications.

Developers gain early access to Gemini 3.1 Flash-Lite

Google’s Gemini 3.1 Flash-Lite has launched in preview for developers via AI Studio and for enterprises through Vertex AI. Designed for high-volume workloads, it promises fast, cost-effective performance while maintaining high-quality outputs.

Priced at just $0.25 per million input tokens and $1.50 per million output tokens, 3.1 Flash-Lite offers 2.5X faster response times and 45% higher output speed than the previous 2.5 Flash model.
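At those rates, workload costs are easy to estimate. The per-token constants below come from the quoted prices; the monthly token volumes are invented for illustration.

```python
# Back-of-the-envelope cost estimate using the quoted Gemini 3.1 Flash-Lite
# rates ($0.25 per 1M input tokens, $1.50 per 1M output tokens).
# The workload sizes below are hypothetical.

INPUT_RATE = 0.25 / 1_000_000   # USD per input token
OUTPUT_RATE = 1.50 / 1_000_000  # USD per output token

def monthly_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. 200M input tokens and 40M output tokens in a month:
cost = monthly_cost(200_000_000, 40_000_000)  # 200*0.25 + 40*1.50 = 110.0 USD
```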

Benchmarks show strong performance across reasoning and multimodal tasks, including an Elo score of 1432 on Arena.ai, 86.9% on GPQA Diamond, and 76.8% on MMMU Pro, surpassing some older, larger Gemini models.

The model also provides adaptive intelligence features, allowing developers to adjust how much the AI ‘thinks’ for each task. It handles both high-frequency tasks, such as translation, and complex tasks, such as interface generation and simulations.

Early-access developers and companies report that 3.1 Flash-Lite handles complex workloads with precision comparable to larger models. Its speed, affordability, and reasoning capabilities make it an attractive choice for scalable, real-time AI applications.

Chrome moves to rapid releases as Google responds to AI disruption

Google is accelerating Chrome’s release cycle rather than maintaining its long-standing four-week cadence.

From September, users on desktop and mobile platforms will receive new stable versions every two weeks, doubling the frequency of feature milestones across speed, stability and usability. Weekly security updates introduced in 2023 remain unchanged.

The faster pace comes as AI-driven browsers seek a foothold in a market long dominated by Chrome.

Products such as ChatGPT Atlas and Perplexity’s Comet embed agentic assistants directly into the browsing experience, automating tasks from summarising pages to scheduling meetings.

Chrome has responded with deeper Gemini integration, including the rollout of autonomous features across its interface.

Google maintains that the accelerated schedule reflects the needs of the evolving web platform, arguing that developers require quicker access to updated tools.

Yet the timing aligns with growing competitive pressure from AI-native browsers, prompting speculation that Chrome’s dominance can no longer be taken for granted.

The shift will begin with Chrome version 153 in beta and stable channels on 8 September 2026. Enterprise administrators and Chromebook users will continue to rely on the eight-week Extended Stable branch, which remains unchanged for organisations that need slower, controlled deployments.

Ripple expands stablecoin platform for global payments

The money-movement solution Ripple Payments has been expanded to integrate both traditional and digital payment rails. The upgrade strengthens its enterprise-grade platform, enabling custody, collections, and liquidity management while supporting global fintech expansion.

The company emphasised that the platform now processes fiat currencies and stablecoins on a single infrastructure.

Operating in more than 60 major markets, Ripple supports corporate on-chain treasury operations through managed custody and virtual account capabilities.

Recent acquisitions of Palisade and Rail have enhanced custody, treasury automation, virtual accounts, and collections, allowing firms to collect, hold, exchange, and pay out both fiat and stablecoins seamlessly.

The expanded platform offers named virtual accounts and wallet issuance, automated collection flows, fund exchange, and settlement functions. Managed custody supports large-scale wallet issuance, fast transaction signing, and transfers to operating accounts.

Companies can collect fiat and stablecoins in integrated accounts with automated FX conversion and settlement. Ripple highlighted its liquidity management expertise, enabling clients to deploy corporate assets optimally.

EU prepares tougher rules for older data centres

The European Commission is preparing more stringent requirements for ageing data centres rather than allowing legacy infrastructure to operate under looser rules.

A draft strategy tied to the EU’s tech sovereignty package signals that older sites will face higher efficiency expectations and stricter sustainability checks as part of an effort to modernise the digital backbone of the EU.

The proposal outlines minimum performance standards for new data centres by 2030, aiming to align the entire sector with the bloc’s climate and resilience goals. Officials want to reduce energy waste and improve monitoring across facilities that have long operated without uniform benchmarks.

The draft points to an expanded role for the Cloud and AI Development Act, which is expected to frame future obligations for cloud providers instead of relying on fragmented national measures.

Brussels sees consistent rules as essential for supporting secure cloud services, AI infrastructure and cross-border digital operations.

The strategy underscores that modernisation is central to the EU’s vision of tech sovereignty. Older centres would need upgrades to maintain compliance, ensuring that Europe’s digital infrastructure remains competitive, efficient and less dependent on external providers.

AI ethics as societal infrastructure in the digital era

In recent days, social media has been alight with discussions about the 2014 series whose portrayal of AI and ethical dilemmas now feels remarkably prophetic: Silicon Valley. Fans and professionals alike are highlighting how the show’s depiction of AI, automated agents, and ethical dilemmas mirrors today’s real-world challenges. 

From algorithmic decision-making to AI shaping social and economic interactions, the series explores the boundaries, responsibilities, and societal impact of AI in ways that feel startlingly relevant. What once seemed like pure comedy is increasingly being seen as a warning, highlighting how the choices we make around AI and its ethical frameworks will shape whether the technology benefits society.

While the show dramatises these dilemmas for entertainment, the real world is now facing the same questions. Recent trends in generative AI, autonomous agents, and large-scale automated decision-making are bringing the show’s scenarios to life, raising urgent ethical questions for developers, policymakers, and society alike.

Balancing technological progress with societal values is essential, as intelligent technologies must align with society, guided by AI ethics.
Source: Freepik

The rise of AI ethics: from niche concern to central requirement

The growing influence of AI on society has propelled ethics from a theoretical discussion to a central factor in technological decision-making. Initially confined to academic debate, ethics in AI is now a guiding force in technological development. The impact of AI is becoming tangible across society, from employment and finance to online content.

Technical performance alone no longer defines success; the consequences of design choices have become morally and socially significant. Governments, international organisations, and corporations are responding by developing ethical frameworks. 

The EU AI Act, the OECD AI Principles, and numerous corporate codes of conduct signal that society expects AI systems to align with human values, demonstrating accountability, fairness, and trustworthiness. Ethical reflection has become a prerequisite for technological legitimacy and societal acceptance.

Functions of AI ethics: trust, guidance, and societal risk

Ethical frameworks for AI fulfil multiple roles, balancing moral guidance with practical necessity. They build public trust between developers, organisations, and users, reassuring society that AI systems operate consistently with shared values.

For developers, ethical principles offer a blueprint for decision-making, helping anticipate societal impact and minimise unintended harm. Beyond guidance, AI ethics acts as a form of societal risk governance, allowing organisations to identify potential consequences before they manifest. 

By integrating ethics into design, AI systems become socially sustainable technologies, bridging technical capability with moral responsibility. This approach is particularly critical in high-stakes domains such as healthcare, finance, and law, where algorithmic decisions can significantly affect human well-being.

The politics of AI ethics: regulatory theatre and corporate influence

Despite widespread adoption, AI ethics frameworks sometimes risk becoming regulatory theatre, where public statements signal commitment but fail to ensure meaningful action. Many organisations promote ethical AI principles, yet consistent enforcement and follow-through often lag behind these claims.

Even with their limitations, ethical frameworks are far from meaningless. They shape public discourse, influence policy, and determine which AI systems gain social legitimacy. The challenge lies in balancing credibility with practical impact, ensuring that ethical commitments are more than symbolic gestures. 

Social media platforms like X amplify this tension, with public scrutiny and viral debates exposing both successes and failures in applying ethical principles.

AI ethics as a lens for technology and society

The prominence of AI ethics reflects a broader societal transformation in evaluating technology. Modern societies no longer judge AI solely by efficiency, speed, or performance; they assess social consequences, fairness, and the distribution of risks and benefits. 

AI is increasingly seen as a social actor rather than a neutral tool, influencing public behaviour, shaping social norms, and redefining concepts such as trust, autonomy, and accountability. Ethical evaluation of AI is not just a philosophical exercise but reflects evolving expectations about the role technology should play in human life.

AI ethics as early-warning governance for social impact

AI ethics functions as a critical early-warning system for society. Ethical principles anticipate harms that might otherwise go unnoticed, from systemic bias to privacy violations. By highlighting potential consequences, ethics enables organisations to act proactively, reducing the likelihood of crises and improving public trust. 

Moreover, ethics ensures that long-term impacts, including societal cohesion, equity, and fairness, are considered alongside immediate technical performance. In doing so, AI ethics bridges the gap between what AI can do and what society deems acceptable, ensuring that innovation remains aligned with moral and social norms.

The bridge between technological power and social legitimacy

AI ethics remains the essential bridge between technological power and social legitimacy. Embedding ethical reflection into AI development ensures that innovation is not only technically effective but also socially sustainable, trustworthy, and accountable. 

Yet a growing tension defines the next phase of this evolution: the accelerating pace of innovation often outstrips the slower processes of ethical deliberation and regulation, raising questions about who sets the norms and how quickly societies can adapt.

Rather than acting solely as a safeguard, ethics is increasingly becoming a strategic dimension of technological leadership, shaping public trust, market adoption, and even geopolitical influence in the global race for AI. The rise of AI ethics, therefore, signals more than a moral awakening, reflecting a structural shift in how technological progress is evaluated and legitimised.

As AI continues to integrate into everyday life, ethical frameworks will determine not only how systems function, but also whether they are accepted as part of the social fabric. Aligning innovation with societal values is no longer optional but the condition under which AI can sustain legitimacy, unlock its full potential, and remain a transformative force that benefits society as a whole.
