Spanish banking giant Banco Santander and Mastercard have completed what they describe as Europe’s first live end-to-end payment executed by an AI agent. The pilot combined Santander’s live payments infrastructure with Mastercard Agent Pay to enable autonomous, permission-based transactions.
Mastercard Agent Pay, launched in April 2025, allows AI agents to initiate and complete payments within predefined consumer limits. The transaction was orchestrated with support from PayOS and integrates Microsoft Azure OpenAI Service and Copilot Studio.
Following the pilot, Santander plans to expand testing and explore new partnerships across agentic commerce use cases. The bank, which manages around €1.84 trillion in assets, is positioning AI as a core driver of innovation.
AI initiatives at Santander are led by chief data and AI officer Ricardo Martín Manjón, hired from BBVA. A strategic partnership with OpenAI has also connected up to 30,000 employees to ChatGPT Enterprise in one of the fastest deployments of its kind.
Global competition in agentic payments is intensifying as Citi, US Bank and Westpac trial Mastercard Agent Pay. Westpac recently completed New Zealand’s first authenticated agentic transaction, while DBS, Visa, Axis Bank and RBL Bank are advancing similar intelligent commerce pilots.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google is accelerating Chrome’s release cycle rather than maintaining its long-standing four-week cadence.
From September, users on desktop and mobile platforms will receive new stable versions every two weeks, doubling the frequency of feature milestones across speed, stability and usability. Weekly security updates introduced in 2023 remain unchanged.
The faster pace comes as AI-driven browsers seek a foothold in a market long dominated by Chrome.
Products such as ChatGPT Atlas and Perplexity’s Comet embed agentic assistants directly into the browsing experience, automating tasks from summarising pages to scheduling meetings.
Chrome has responded with deeper Gemini integration, including the rollout of autonomous features across its interface.
Google maintains that the accelerated schedule reflects the needs of the evolving web platform, arguing that developers require quicker access to updated tools.
Yet the timing aligns with growing competitive pressure from AI-native browsers, prompting speculation that Chrome’s dominance can no longer be taken for granted.
The shift will begin with Chrome version 153 in beta and stable channels on 8 September 2026. Enterprise administrators and Chromebook users will continue to rely on the eight-week Extended Stable branch, which remains unchanged for organisations that need slower, controlled deployments.
The European Commission is preparing more stringent requirements for ageing data centres rather than allowing legacy infrastructure to operate under looser rules.
A draft strategy tied to the EU’s tech sovereignty package signals that older sites will face higher efficiency expectations and stricter sustainability checks as part of an effort to modernise the digital backbone of the EU.
The proposal outlines minimum performance standards for new data centres by 2030, aiming to align the entire sector with the bloc’s climate and resilience goals. Officials want to reduce energy waste and improve monitoring across facilities that have long operated without uniform benchmarks.
The draft points to an expanded role for the Cloud and AI Development Act, which is expected to frame future obligations for cloud providers instead of relying on fragmented national measures.
Brussels sees consistent rules as essential for supporting secure cloud services, AI infrastructure and cross-border digital operations.
The strategy underscores that modernisation is central to the EU’s vision of tech sovereignty. Older centres would need upgrades to maintain compliance, ensuring that Europe’s digital infrastructure remains competitive, efficient and less dependent on external providers.
In recent days, social media has been alight with discussions about Silicon Valley, the 2014 series whose portrayal of AI now feels remarkably prophetic. Fans and professionals alike are highlighting how the show’s depiction of AI, automated agents, and ethical dilemmas mirrors today’s real-world challenges.
From algorithmic decision-making to AI shaping social and economic interactions, the series explores the boundaries, responsibilities, and societal impact of AI in ways that feel startlingly relevant. What once seemed like pure comedy is increasingly being seen as a warning, highlighting how the choices we make around AI and its ethical frameworks will shape whether the technology benefits society.
While the show dramatises these dilemmas for entertainment, the real world is now facing the same questions. Recent trends in generative AI, autonomous agents, and large-scale automated decision-making are bringing the show’s predictions to life, raising urgent ethical questions for developers, policymakers, and society alike.
The rise of AI ethics: from niche concern to central requirement
The growing influence of AI on society has propelled ethics from academic debate to a central factor in technological decision-making. The impact of AI is becoming tangible across society, from employment and finance to online content.
Technical performance alone no longer defines success; the consequences of design choices have become morally and socially significant. Governments, international organisations, and corporations are responding by developing ethical frameworks.
The EU AI Act, the OECD AI Principles, and numerous corporate codes of conduct signal that society expects AI systems to align with human values, demonstrating accountability, fairness, and trustworthiness. Ethical reflection has become a prerequisite for technological legitimacy and societal acceptance.
Functions of AI ethics: trust, guidance, and societal risk
Ethical frameworks for AI fulfil multiple roles, balancing moral guidance with practical necessity. They build public trust between developers, organisations, and users, reassuring society that AI systems operate consistently with shared values.
For developers, ethical principles offer a blueprint for decision-making, helping anticipate societal impact and minimise unintended harm. Beyond guidance, AI ethics acts as a form of societal risk governance, allowing organisations to identify potential consequences before they manifest.
By integrating ethics into design, AI systems become socially sustainable technologies, bridging technical capability with moral responsibility. Such an approach is particularly critical in high-stakes domains such as healthcare, finance, and law, where algorithmic decisions can significantly affect human well-being.
The politics of AI ethics: regulatory theatre and corporate influence
Despite widespread adoption, AI ethics frameworks sometimes risk becoming regulatory theatre, where public statements signal commitment but fail to ensure meaningful action. Many organisations promote ethical AI principles, yet consistent enforcement and follow-through often lag behind these claims.
Even with their limitations, ethical frameworks are far from meaningless. They shape public discourse, influence policy, and determine which AI systems gain social legitimacy. The challenge lies in balancing credibility with practical impact, ensuring that ethical commitments are more than symbolic gestures.
Social media platforms like X amplify this tension, with public scrutiny and viral debates exposing both successes and failures in applying ethical principles.
AI ethics as a lens for technology and society
The prominence of AI ethics reflects a broader societal transformation in evaluating technology. Modern societies no longer judge AI solely by efficiency, speed, or performance; they assess social consequences, fairness, and the distribution of risks and benefits.
AI is increasingly seen as a social actor rather than a neutral tool, influencing public behaviour, shaping social norms, and redefining concepts such as trust, autonomy, and accountability. Ethical evaluation of AI is not just a philosophical exercise, but demonstrates evolving expectations about the role technology should play in human life.
AI ethics as early-warning governance for social impact
AI ethics functions as a critical early-warning system for society. Ethical principles anticipate harms that might otherwise go unnoticed, from systemic bias to privacy violations. By highlighting potential consequences, ethics enables organisations to act proactively, reducing the likelihood of crises and improving public trust.
Moreover, ethics ensures that long-term impacts, including societal cohesion, equity, and fairness, are considered alongside immediate technical performance. In doing so, AI ethics bridges the gap between what AI can do and what society deems acceptable, ensuring that innovation remains aligned with moral and social norms.
The bridge between technological power and social legitimacy
AI ethics remains the essential bridge between technological power and social legitimacy. Embedding ethical reflection into AI development ensures that innovation is not only technically effective but also socially sustainable, trustworthy, and accountable.
Yet a growing tension defines the next phase of this evolution: the accelerating pace of innovation often outstrips the slower processes of ethical deliberation and regulation, raising questions about who sets the norms and how quickly societies can adapt.
Rather than acting solely as a safeguard, ethics is increasingly becoming a strategic dimension of technological leadership, shaping public trust, market adoption, and even geopolitical influence in the global race for AI. The rise of AI ethics, therefore, signals more than a moral awakening, reflecting a structural shift in how technological progress is evaluated and legitimised.
As AI continues to integrate into everyday life, ethical frameworks will determine not only how systems function, but also whether they are accepted as part of the social fabric. Aligning innovation with societal values is no longer optional but the condition under which AI can sustain legitimacy, unlock its full potential, and remain a transformative force that benefits society as a whole.
Europe is building a federated cloud and AI infrastructure intended to reduce reliance on US and Chinese technology providers and avoid ongoing strategic vulnerability.
The project, known as EURO-3C, was announced in Barcelona by Telefónica and is backed by the European Commission. More than seventy organisations across telecommunications, technology and emerging companies have joined the effort.
Architects of the scheme argue that linking national infrastructures into a shared network of nodes offers a realistic path forward, particularly as Europe cannot easily create a hyperscale cloud provider from scratch.
The initiative follows a series of US cloud outages that exposed the risks of excessive dependence on external infrastructure and raised questions about sovereignty, resilience and long-term competitiveness.
Commission officials described the programme as a way to build a secure cross-border digital ecosystem that supports industries such as automotive, e-health, public administration and sovereign government cloud.
Telefónica stressed that agentic AI, capable of taking autonomous actions, will play a central role in enabling Europe to develop technology rather than import it.
The partners view the project as a foundation for a unified and independent digital environment that strengthens industrial supply chains and prepares European sectors for the next phase of cloud and AI adoption.
They present the initiative as a significant step toward reducing strategic exposure while stimulating domestic innovation.
Samsung has secured an agreement with Rakuten Mobile to deliver Open RAN-compliant 5G radios supporting a nationwide mobile network upgrade across Japan. Commercial deployment is expected to begin in 2026 following extensive testing of the cloud-native infrastructure.
Rakuten Mobile continues to expand its fully virtualised network architecture, designed to improve flexibility, performance, and vendor interoperability. The integration of Samsung equipment demonstrates growing industry confidence in Open RAN technology in large-scale commercial deployments.
Equipment supplied includes low-band and mid-band radios, alongside energy-efficient Massive MIMO systems operating in the 3.8 GHz spectrum. Compact hardware enables easier installation on buildings and street infrastructure while improving capacity in dense urban areas.
Executives from both companies highlighted ambitions to accelerate AI-enabled networks and global Open RAN adoption. Samsung also positioned the partnership as a step toward future 6G innovation and broader next-generation connectivity services.
Social media platform X has introduced a new ‘Paid Partnership’ label that creators can attach to posts to show when content is promotional, rather than leaving audiences unsure about commercial intent.
The update improves transparency for followers while meeting rules set by the Federal Trade Commission, which expects sponsored material to be disclosed clearly.
Creators previously relied on hashtags such as #ad or #paidpartnership instead of an integrated disclosure option. The new feature allows users to apply the label through a content-disclosure toggle either during posting or afterwards.
X’s product lead, Nikita Bier, said undisclosed promotions damage trust and weaken the platform’s integrity, so the tool is meant to support creators and regulators simultaneously.
X has been trying to build a stronger creator ecosystem by offering payouts, subscriptions and other incentives. Yet many creators still favour Instagram or YouTube over X as their primary channel, because those platforms have longer-standing monetisation tools.
The addition of a built-in label aligns X with broader industry practice and aims to regain credibility among advertisers and creators.
The company has also tightened API access, preventing programmatic replies unless a user is directly mentioned or quoted.
The change seeks to limit LLM-generated spam, stopping automated responses from distorting discussions or appearing as fake engagement beneath sponsored content.
X hopes these combined measures will enhance authenticity around commercial posts.
South Korean electronics company Samsung has completed a multi-cell test that brings its virtualised RAN software together with accelerated computing from NVIDIA.
The validation, carried out in a realistic network environment, confirms that the combined architecture is nearing commercial readiness as AI-native networks continue to evolve.
The company plans to highlight the achievement at Mobile World Congress 2026 as part of its broader push toward software-driven networks that use AI instead of fixed hardware optimisation.
Samsung will demonstrate an AI-based MIMO beamformer running on NVIDIA infrastructure, which offers operators higher throughput and improved spectral efficiency by extracting more value from existing spectrum.
NVIDIA and Samsung are also advancing a unified processor design that integrates CPU and GPU within a single chipset, enabling faster and more efficient data exchange.
Recently, Samsung integrated its vRAN software with the NVIDIA ARC Compact platform equipped with the Grace CPU and L4 GPU, taking another step toward commercial AI-RAN deployments.
The firm says that experience from large-scale vRAN rollouts and close collaboration with industry computing partners strengthens its position in delivering AI-powered network platforms for operators worldwide.
Researchers at Microsoft have identified phishing activity that abuses legitimate OAuth redirection behaviour instead of relying on credential theft.
Threat actors create malicious applications within attacker-controlled tenants and configure redirect pages that lead victims from trusted authentication domains to malware-delivery sites.
The technique has been used against government and public-sector organisations and is designed to bypass email and browser defences by embedding URLs that appear genuine.
The attack begins with lures themed around documents, financial matters or meeting requests, each containing OAuth URLs crafted to trigger silent authentication.
These links intentionally use invalid parameters that trigger an error, prompting the identity provider to redirect the user to the attacker’s infrastructure.
Validation errors, session checks and Conditional Access evaluations provide attackers with information about session status without granting access to tokens, yet still deliver the victim to a malicious landing page.
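The error-redirect mechanics described above can be illustrated with a short sketch. Everything named here is hypothetical: the identity-provider host, the client_id placeholder and the redirect_uri are invented for illustration and are not drawn from the reported campaign. The point is that the lure URL targets a genuine authorisation endpoint but carries a deliberately malformed parameter, so the provider answers with an error redirect to attacker-controlled infrastructure.

```python
from urllib.parse import urlencode

# Hypothetical sketch of the lure URL structure: a legitimate identity
# provider's authorisation endpoint, an attacker-registered application,
# and prompt=none to attempt silent authentication. The malformed scope
# forces a validation error, so the provider redirects the browser to
# the attacker's redirect_uri with the error details attached.
params = {
    "client_id": "<attacker-registered-app-id>",         # app in attacker tenant
    "response_type": "code",
    "redirect_uri": "https://attacker.example/landing",  # attacker infrastructure
    "prompt": "none",                                    # silent authentication
    "scope": "deliberately invalid scope!",              # triggers error redirect
}
lure = "https://login.idp.example/oauth2/authorize?" + urlencode(params)
print(lure)
```

Because the visible domain in the lure is the trusted identity provider, URL-reputation checks see a genuine authentication host; the hop to the malicious landing page only happens after the provider processes the request.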
Once redirected, victims encounter phishing frameworks or are served ZIP files containing shortcut files and HTML-based loaders. These run PowerShell commands that perform system discovery and extract files used for DLL side-loading.
Executing a legitimate process allows a malicious DLL to load unseen, decrypt the final payload and establish a connection to a remote command-and-control server for hands-on keyboard activity.
Microsoft has removed the identified malicious OAuth applications from Entra, although related activity continues to appear.
Microsoft emphasises that OAuth redirection follows standards such as RFC 6749 and RFC 9700, meaning attackers are abusing expected protocol behaviour rather than exploiting software vulnerabilities.
Stronger governance of OAuth applications, tighter consent controls and cross-domain monitoring are required to prevent trusted authentication flows from being turned into delivery paths for phishing and malware.
AI-generated search summaries are reshaping online discovery and pushing Reddit to the forefront of global information flows.
The rise of Google’s AI Overview feature places curated AI summaries above traditional search results, encouraging users to rely on machine-generated syntheses instead of browsing lists of websites.
Reddit’s visibility surged after the platform agreed to data access partnerships with Google and OpenAI, enabling large language models to train on its vast archive of human conversations.
The platform’s user-generated discussions are increasingly prioritised because they provide commentary viewed as more neutral and less commercially influenced.
Research from Profound identifies Reddit as the most cited source across major AI platforms. Reddit’s rapid expansion reflects this shift.
It has overtaken TikTok in the UK, according to Ofcom, and now reports 116 million daily active users and more than one billion monthly users.
Communities built around niche interests, combined with voting systems and karma-driven credibility, create a structure that appeals to AI systems searching for grounded, human-authored content.
The platform’s design, centred on subreddits run by volunteer moderators, reinforces trust signals that large models can evaluate when generating AI Overview results.
As AI-powered search becomes the dominant interface for navigating the internet, Reddit’s role as a primary corpus for training and citation continues to expand, reshaping how people discover and verify information.