AI agents complete first secure transaction with Mastercard and PayOS

PayOS and Mastercard have completed the first live agentic payment using a Mastercard Agentic Token, marking a pivotal step for AI-driven commerce. The demonstration, powered by Mastercard Agent Pay, extends the tokenisation infrastructure that already underpins mobile payments and card storage.

The system enables AI agents to initiate payments while enforcing consent, authentication, and fraud checks, forming what Mastercard refers to as the trust layer. The demonstration shows how card networks are preparing for agentic transactions to become central to digital commerce.
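Conceptually, that trust layer sits between the agent and the card network: a payment is released only when consent, authentication, and fraud checks all pass. The sketch below is a minimal, purely illustrative Python example of that gating idea; none of the names in it (AgentPaymentRequest, trust_layer_approve, and so on) come from Mastercard or PayOS, and it is not their API.

```python
from dataclasses import dataclass

# Illustrative sketch only; these types and checks are hypothetical
# placeholders, not Mastercard Agent Pay interfaces.

@dataclass
class AgentPaymentRequest:
    agent_id: str   # the AI agent initiating the payment
    token: str      # a scoped, tokenised credential (not a raw card number)
    amount: float
    merchant: str

def user_has_consented(req: AgentPaymentRequest) -> bool:
    # In a real system this would check a consent record the cardholder
    # granted to this specific agent, possibly scoped by merchant and amount.
    return True

def agent_is_authenticated(req: AgentPaymentRequest) -> bool:
    # Placeholder for verifying that the token was issued to this agent
    # and has not been revoked.
    return req.token.startswith("tok_")

def passes_fraud_checks(req: AgentPaymentRequest) -> bool:
    # Placeholder for network-side risk scoring.
    return req.amount < 500

def trust_layer_approve(req: AgentPaymentRequest) -> bool:
    """A payment is released only if all three gates pass."""
    return (user_has_consented(req)
            and agent_is_authenticated(req)
            and passes_fraud_checks(req))

if __name__ == "__main__":
    req = AgentPaymentRequest("agent-42", "tok_abc123", 120.0, "example-store")
    print("approved" if trust_layer_approve(req) else "declined")
```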

Mastercard’s Chief Digital Officer, Pablo Fourez, stated that the company is developing a secure and interoperable ecosystem for AI-driven payments, underpinned by tokenised credentials. The framework aims to prepare for a future where the internet itself supports native agentic commerce.

For PayOS, the milestone represents a shift from testing to commercialisation. Chief executive Johnathan McGowan said the company is now onboarding customers and offering tools for fraud prevention, payments risk management, and improved user experiences.

The achievement signals a broader transition as agentic AI moves from pilot to real-world deployment. If security models remain effective, agentic payments could soon differentiate platforms, merchants, and issuers, embedding autonomy into digital transactions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI-powered Opera Neon browser launches with premium subscription

After its announcement in May, Opera has started rolling out Neon, its first AI-powered browser. Unlike traditional browsers, Neon is designed for professionals who want AI to simplify complex online workflows.

The browser introduces Tasks, which act like self-contained workspaces. AI can understand context, compare sources, and operate across multiple tabs simultaneously to manage projects more efficiently.

Neon also features Cards, reusable AI prompts that users can customise or download from a community store, streamlining repeated actions and tasks.

Its standout tool, Neon Do, performs real-time on-screen actions such as opening tabs, filling forms, and gathering data, while keeping everything local. Opera says no data is shared, and all information is deleted after 30 days.

Neon is available by subscription at $19.90 per month. Invitations are limited during rollout, but Opera promises broader availability soon.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

California enacts first state-level AI safety law

In the US, California Governor Gavin Newsom has signed SB 53, a landmark law establishing transparency and safety requirements for large AI companies.

The legislation obliges major AI developers such as OpenAI, Anthropic, Meta, and Google DeepMind to disclose their safety protocols. It also introduces whistle-blower protections and a reporting mechanism for safety incidents, including cyberattacks and autonomous AI behaviour not covered by the EU AI Act.

Reactions across the industry have been mixed. Anthropic supported the law, while Meta and OpenAI lobbied against it, with OpenAI publishing an open letter urging Newsom not to sign. Tech firms have warned that state-level measures could create a patchwork of regulation that stifles innovation.

Despite resistance, the law positions California as a national leader in AI governance. Newsom said the state had demonstrated that it was possible to safeguard communities without stifling growth, calling AI ‘the new frontier in innovation’.

Similar legislation is under consideration in New York, while California lawmakers are also debating SB 243, a separate bill that would regulate AI companion chatbots.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New Facebook tools help creators boost fan engagement

Facebook has introduced new tools designed to help creators increase engagement and build stronger communities on the platform. The update includes fan challenges, custom badges for top contributors, and new insights to track audience loyalty.

Fan challenges allow creators with over 100,000 followers to issue prompts inviting fans to share content on a theme or event. Contributions are displayed in a dedicated feed, with a leaderboard ranking entries by reactions.

Challenges can run for a week or stretch over several months, giving creators flexibility in engaging their audiences.

Meta has also launched custom fan badges for creators with more than one million followers, enabling them to rename Top Fan badges each month. The feature gives elite-level fans extra recognition and strengthens the sense of community. Fans can choose whether to accept the custom badge.

To complement these features, Facebook has added new metrics showing the number of Top Fans on a page. These insights help creators measure the impact of their engagement efforts and reward their most dedicated followers.

The tools are now available to eligible creators worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT gets family safety update with parental controls

OpenAI has introduced new parental controls for ChatGPT, giving families greater oversight of how teens use the AI platform. The tools, which are live for all users, allow parents to link accounts with their children and manage settings through a simple control dashboard.

The system introduces stronger safeguards for teen accounts, including filters on graphic or harmful content and restrictions on roleplay involving sex, violence or extreme beauty ideals.

Parents can also turn off features such as voice mode, memory, and image generation, or set quiet hours during which ChatGPT cannot be accessed.

A notification mechanism has been added to alert parents if a teen shows signs of acute distress, escalating to emergency services in critical cases. OpenAI said the controls were shaped by consultation with experts, advocacy groups, and policymakers and will be expanded as research evolves.

To complement the parental controls, a new online resource hub has been launched to help families learn how ChatGPT works and explore positive uses in study, creativity and daily life.

OpenAI also plans to roll out an age-prediction system that automatically applies teen-appropriate settings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EDPB issues guidelines on GDPR-DSA tension for platforms

On 12 September 2025, the European Data Protection Board (EDPB) adopted draft guidelines detailing how online platforms should reconcile requirements under the GDPR and the Digital Services Act (DSA). The draft is now open for public consultation through 31 October.

The guidelines address key areas of tension, including proactive investigations, notice-and-action systems, deceptive design, recommender systems, age safety and transparency in advertising. They emphasise that DSA obligations must be implemented in ways consistent with GDPR principles.

For instance, the guidelines suggest that proactive investigations of illegal content should generally be based on the ‘legitimate interests’ legal basis, include safeguards for accuracy, and avoid automated decisions that produce legal effects.

Platforms are also told to offer users recommender systems that do not rely on profiling. The guidelines encourage data protection impact assessments (DPIAs) where high risks are identified.

The guidance also clarifies that the DSA does not override the GDPR. Platforms subject to both must ensure lawful, fair and transparent processing while integrating risk analysis and privacy by design. The draft guidelines include practical examples and cross-references to existing EDPB documents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE university bets on AI to secure global talent

Abu Dhabi’s Mohamed bin Zayed University of AI (MBZUAI) claims to have rapidly become central to the UAE’s ambition to lead in AI.

Founded six years ago, the state-backed institute has hired over 100 faculty, recruited students from 49 nations, and now counts more than 700 alumni. All students receive full scholarships, while professors enjoy freedom from chasing research grants.

The university works closely with G42, the UAE’s flagship AI firm, and has opened a research lab in Silicon Valley. It has already unveiled language models for non-English languages, including Arabic, Kazakh, and Hindi, and recently launched K2 Think, an open-source reasoning model.

MBZUAI is part of a wider national strategy that pairs investment in semiconductor chips with the creation of a global talent pipeline. The UAE now holds over 188,000 AI chips, second only to the US, and aims for AI to contribute 20% of its non-oil GDP by 2031.

About 80% of graduates have remained in the country, aided by long-term residency incentives and tax-free salaries. Analysts say the university’s success will depend on whether it can sustain momentum and secure permanent endowments to outlast shifting UAE government priorities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Lufthansa to cut thousands of jobs as AI reshapes operations

Lufthansa Group announced it will cut 4,000 jobs by 2030 as part of a restructuring drive powered by AI and digitalisation. Most of the affected positions will be administrative roles in Germany, with operational staff largely unaffected.

The company said it aims to improve efficiency by using AI to reduce duplication across its airlines, Lufthansa, SWISS, Austrian Airlines, Brussels Airlines and ITA Airways. It noted that advances in AI would streamline work and allow greater integration within the group.

Despite the job cuts, demand for flights remains high. Capacity is constrained by limited aircraft and engine supply, which has kept planes full and revenue strong. Lufthansa said it expects significantly higher profitability by the end of the decade.

The airline also confirmed plans for the largest fleet modernisation in its history, with over 230 new aircraft to be delivered by 2030, including 100 long-haul jets. Lufthansa employed more than 101,000 people in 2024 and posted revenue of €37.6 billion.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers exploit flaw in two million Cisco devices

Hackers have targeted up to two million Cisco devices using a newly disclosed vulnerability in the company’s networking software. The flaw, tracked as CVE-2025-20352, affects all supported versions of Cisco IOS and IOS XE, which power many routers and switches.

Cisco confirmed that attackers have exploited the weakness in the wild, crashing systems, implanting malware, and potentially extracting sensitive data. The campaign builds on previous activity by the same threat group, which has also exploited Cisco Adaptive Security Appliance devices.

Attackers gained access after local administrator credentials were compromised, allowing them to implant malware and execute commands. The company’s Product Security Incident Response Team urged customers to upgrade immediately to fixed software releases to secure their systems.

The Canadian Centre for Cyber Security has warned organisations about sophisticated malware exploiting flaws in outdated Cisco ASA devices, urging immediate patching and stronger defences to protect critical systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Diplo explores AI and diplomacy in the Gulf

DiploFoundation has taken its work on AI and governance to the Gulf, with engagements in Oman and Qatar focused on how AI is reshaping diplomacy and policymaking. In Muscat, Jovan Kurbalija delivered a lecture on AI’s geopolitical implications, led a workshop on the future of digital diplomacy, and met with institutions advancing Oman’s National AI Strategy and innovation ecosystem.

In Doha, Diplo participated in the international conference AI Ethics: The Convergence of Technology and Diverse Moral Traditions. Dr Kurbalija joined a panel on transnational AI principles, discussing how diverse ethical and cultural frameworks can guide global standards for responsible AI.

Diplo in the Gulf

The Gulf engagements highlighted the need to balance innovation with responsibility. Discussions focused on equipping government staff with AI expertise, ensuring technology is integrated into governance that reflects cultural values, and shaping diplomatic practice around collaboration with tech companies.

Diplo’s programme builds on its long-standing research into how Arabic and Islamic philosophical traditions can enrich global debates on AI. The initiative aims to advance inclusive, practical, and ethical approaches to AI in international policy and diplomacy by bringing these perspectives to the table.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!