AI takes over eCommerce tasks as Visa and Mastercard adapt

Visa and Mastercard have announced major AI initiatives that could reshape the future of e-commerce, marking a significant step in the evolution of retail technology.

The initiatives—Visa’s Intelligent Commerce and Mastercard’s Agent Pay—move beyond traditional recommendation engines to empower AI agents to make purchases directly on behalf of consumers.

Visa is partnering with leading tech firms, including Anthropic, IBM, Microsoft, OpenAI, and Stripe, to build a system where AI agents shop according to user preferences.

Meanwhile, Mastercard’s Agent Pay integrates payment functionality into AI-driven conversational platforms, blending commerce and conversation into a seamless user experience.

These announcements follow years of AI integration into retail, with adoption growing at 40% annually and the market projected to surpass $8 billion by 2024. Retailers initially used AI for backend optimisation, but nearly 87% now apply it in customer-facing roles.

The next phase, where AI doesn’t just suggest but acts, is rapidly taking shape—backed by consumer demand for hyper-personalisation and efficiency.

Research suggests 71% of consumers want generative AI embedded in their shopping journeys, with 58% already turning to AI tools over traditional search engines for recommendations. However, consumer trust remains a challenge.

Satisfaction with AI dropped slightly last year, highlighting concerns over privacy and implementation quality—especially critical for financial transactions.

Visa and Mastercard’s moves reflect both opportunity and necessity. With 75% of retailers viewing AI agents as essential within the next year, and AI expected to handle 20% of e-commerce tasks, the payment giants are positioning themselves as indispensable infrastructure in a fast-changing market.

Their broad alliances across AI, payments, and tech underline a shared goal: to stay central as shopping behaviours evolve in the AI era.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How AI could quietly sabotage critical software

When Google’s Jules AI agent added a new feature to a live codebase in under ten minutes, it initially seemed like a breakthrough. But the same capabilities that allow AI tools to scan, modify, and deploy code rapidly also introduce new, troubling possibilities—particularly in the hands of malicious actors.

Experts are now voicing concern over the risks posed by hostile agents deploying AI tools with coding capabilities. If weaponised by rogue states or cybercriminals, the tools could be used to quietly embed harmful code into public or private repositories, potentially affecting millions of lines of critical software.

Even a single unnoticed line among hundreds of thousands could trigger back doors, logic bombs, or data leaks. The risk lies in how AI can slip past human vigilance.
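As an illustration of how subtle such a change can be, consider this contrived Python sketch (the function, secret, and scenario are hypothetical, not drawn from any real incident). A one-line edit to a token check looks plausible in review yet quietly opens a back door:

```python
import hmac
import hashlib

SECRET = b"server-side-secret"  # hypothetical server key

def is_valid_token(user: str, token: str) -> bool:
    """Check a user's API token against the expected HMAC digest."""
    expected = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    # Sabotaged line: comparing only the first 4 characters instead of the
    # full digest means an attacker can brute-force a passing token in
    # seconds. The correct line would be:
    #     return hmac.compare_digest(expected, token)
    return expected[:4] == token[:4]
```

In a diff of hundreds of changed lines, the truncated comparison reads like ordinary code, which is exactly the kind of oversight the experts describe.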

From modifying update mechanisms to exfiltrating sensitive data or weakening cryptographic routines, the threat is both technical and psychological.

Developers must catch every malicious change; an AI-assisted attacker only needs to succeed once. As such tools become more advanced and publicly available, the conversation around safeguards, oversight, and secure-by-design principles is becoming urgent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New York Times partners with Amazon on AI integration

The New York Times Company and Amazon have signed a multi-year licensing agreement that will allow Amazon to integrate editorial content from The New York Times, NYT Cooking, and The Athletic into a range of its AI-powered services, the companies announced Wednesday.

Under the deal, Amazon will use licensed content for real-time display in consumer-facing products such as Alexa, as well as for training its proprietary foundation models. The agreement marks an expansion of the firms’ existing partnership.

‘The agreement expands the companies’ existing relationship, and will deliver additional value to Amazon customers while bringing Times journalism to broader audiences,’ the companies said in a joint statement.

According to the announcement, the licensing terms include ‘real-time display of summaries and short excerpts of Times content within Amazon products and services’ alongside permission to use the content in AI model development. Amazon platforms will also feature direct links to full Times articles.

Both companies described the partnership as a reflection of a shared commitment to delivering global news and information across Amazon’s AI ecosystem. Financial details of the agreement were not made public.

The announcement comes amid growing industry debate about the role of journalistic material in training AI systems.

By entering a formal licensing arrangement, The New York Times positions itself as one of the first major media outlets to publicly align with a technology company for AI-related content use.

The companies have yet to name additional Amazon products that will feature Times content, and no timeline has been disclosed for the rollout of the new integrations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Croatia urged to embed human rights into AI law

Politiscope recently held an event at the Croatian Journalists’ Association to highlight the human rights risks of AI.

As Croatia begins drafting a national law to implement the EU AI Act, the event aimed to push for stronger protections and transparency instead of relying on vague promises of innovation.

Croatia’s working group is still deciding key elements of the law, such as which authority will enforce it, making this an important moment for public input.

Experts warned that AI systems could increase surveillance, discrimination, and exclusion. Speakers presented troubling examples, including inaccurate biometric tools and algorithms that deny benefits or profile individuals unfairly.

Campaigners from across Europe, including EDRi, showcased how civil society has already stopped invasive AI tools in places like the Netherlands and Serbia. They argued that ‘values’ embedded in corporate AI systems often lack accountability and harm marginalised groups instead of protecting them.

Rather than presenting AI as a distant threat or a miracle cure, the event focused on current harms and the urgent need for safeguards. Speakers called for a public register of AI use in state institutions, a ban on biometric surveillance in public, and full civil society participation in shaping AI rules.

A panel urged Croatia to go beyond the EU Act’s baseline by embracing more transparent and citizen-led approaches.

Despite having submitted recommendations, Politiscope and other civil society organisations remain excluded from the working group drafting the law. While business groups and unions often gain access through social dialogue rules, CSOs are still sidelined.

Politiscope continues to demand an open and inclusive legislative process, arguing that democratic oversight is essential for AI to serve people instead of controlling them.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU says US tech firms censor more

Far more online content is removed under US tech firms’ terms and conditions than under the EU’s Digital Services Act (DSA), according to Tech Commissioner Henna Virkkunen.

Her comments respond to criticism from American tech leaders, including Elon Musk, who have labelled the DSA a threat to free speech.

In an interview with Euractiv, Virkkunen said recent data show that 99% of content removals in the EU between September 2023 and April 2024 were carried out by platforms like Meta and X based on their own rules, not due to EU regulation.

Only 1% of cases involved ‘trusted flaggers’ — vetted organisations that report illegal content to national authorities. Just 0.001% of those reports led to an actual takedown decision by authorities, she added.

The DSA’s transparency rules made those figures available. ‘Often in the US, platforms have more strict rules with content,’ Virkkunen noted.

She gave examples such as discussions about euthanasia and nude artworks, which are often removed under US platform policies but remain online under European guidelines.

Virkkunen recently met with US tech CEOs and lawmakers, including Republican Congressman Jim Jordan, a prominent critic of the DSA and the DMA.

She said the data helped clarify how EU rules actually work. ‘It is important always to underline that the DSA only applies in the European territory,’ she said.

While pushing back against American criticism, Virkkunen avoided direct attacks on individuals like Elon Musk or Mark Zuckerberg. She suggested platform resistance reflects business models and service design choices.

Asked about delays in final decisions under the DSA — including open cases against Meta and X — Virkkunen stressed the need for a strong legal basis before enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AT&T hit by alleged 31 million record breach

A hacker has allegedly leaked data from 31 million AT&T customers, raising fresh concerns over the security of one of America’s largest telecom providers. The data, posted on a major dark web forum in late May 2025, is said to contain 3.1GB of customer information in both JSON and CSV formats.

Instead of isolated details, the breach reportedly includes highly sensitive data: full names, dates of birth, tax IDs, physical and email addresses, device and cookie identifiers, phone numbers, and IP addresses.

Cybersecurity firm DarkEye flagged the leak, warning that the structured formats make the data easy for criminals to exploit.

If verified, the breach would mark yet another major incident for AT&T. In March 2024, the company confirmed that personal information from 73 million users had been leaked.

Just months later, a July breach exposed call records and location metadata for nearly 110 million customers, with blame directed at compromised Snowflake cloud accounts.

AT&T has yet to comment on the latest claims. Experts warn that the combination of tax numbers and device data could enable identity theft, financial scams, and advanced phishing attacks.

For a company already under scrutiny for past security lapses, the latest breach could further damage public trust.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Victoria’s Secret website hit by cyber attack

Victoria’s Secret’s website has remained offline for three days due to a security incident the company has yet to fully explain. A spokesperson confirmed steps are being taken to address the issue, saying external experts have been called in and some in-store systems were also taken down as a precaution.

Instead of revealing specific details, the retailer has left users with only a holding message on a pink background. It has declined to comment on whether ransomware is involved, when the disruption began, or if law enforcement has been contacted.

The firm’s physical stores continue operating as normal, and payment systems are unaffected, suggesting the breach has hit other digital infrastructure. Still, the shutdown has rattled investors—shares fell nearly seven percent on Wednesday.

With online sales accounting for a third of Victoria’s Secret’s $6 billion annual revenue, the pressure to resolve the situation is high.

The timing has raised eyebrows, as cybercriminals often strike during public holidays like Memorial Day, when IT teams are short-staffed. The attack follows a worrying trend among retailers.

UK giants such as Harrods, Marks & Spencer, and the Co-op have all suffered recent breaches. Experts warn that US chains are becoming the next major targets, with threat groups like Scattered Spider shifting their focus across the Atlantic.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Copilot for Gaming now in testing on Xbox app for iOS and Android

Microsoft has begun rolling out the beta version of its new AI-powered tool, Copilot for Gaming, designed to enhance the gaming experience through personalised assistance. Available now in the Xbox app for iOS and Android, the feature lets users ask game-related questions and receive tailored responses based on their gaming history, achievements, and account data.

The AI assistant can provide tips to improve a gamer’s score, suggest games based on user preferences, and answer account-specific questions like subscription details or recent in-game milestones. It operates on a second screen to avoid disrupting gameplay and uses player activity and Bing search data to craft responses.

Initially available in English for players aged 18 and older, the beta spans over 50 countries, including the US, Canada, Brazil, Serbia, and Japan. Microsoft says more features are on the way, including personalised coaching and deeper in-game support, with plans to expand the rollout to additional regions in the future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Telegram partners with Musk’s xAI

Elon Musk’s AI company, xAI, is partnering with Telegram to bring its AI assistant, Grok, to the messaging platform’s more than one billion users.

Telegram founder Pavel Durov announced that Grok will be integrated into Telegram’s apps and distributed directly through the service.

Instead of a simple tech integration, the arrangement includes a significant financial deal. Telegram is set to receive $300 million in cash and equity from xAI, along with half of the revenue from any xAI subscriptions sold through the platform. The agreement is expected to last one year.

The move mirrors Meta’s recent rollout of AI features on WhatsApp, which drew criticism from users concerned about the changing nature of private messaging.

Analysts like Hanna Kahlert of Midia Research argue that users still prefer using social platforms to connect with friends, and that adding AI tools could erode trust and shift focus away from what made these apps popular in the first place.

The partnership also links two controversial tech figures. Durov was arrested in France in 2024 over allegations that Telegram failed to curb criminal activity, though he denies obstructing law enforcement.

Meanwhile, Musk has been pushing into AI development after falling out with OpenAI, and is using xAI to rival industry giants. In March, when xAI acquired X, formerly known as Twitter, he valued the AI firm at $80 billion.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The future of search: Personalised AI and the privacy crossroads

The rise of personalised AI is poised to radically reshape how we interact with technology, with search engines evolving into intelligent agents that not only retrieve information but also understand and act on our behalf. No longer just a list of links, search is merging into chatbots and AI agents that synthesise information from across the web to deliver tailored answers.

Google and OpenAI have already begun this shift, with services like AI Overviews and ChatGPT Search leading a trend that analysts say could cut traditional search volume by 25% by 2026. That transformation is driven by the AI industry’s hunger for personal data.

To offer highly customised responses and assistance, AI systems require in-depth profiles of their users, encompassing everything from dietary preferences to political beliefs. The deeper the personalisation, the greater the privacy risks.

OpenAI, for example, envisions a ‘super assistant’ capable of managing nearly every aspect of your digital life, fed by detailed knowledge of your past interactions, habits, and preferences. Google and Meta are pursuing similar paths, with Mark Zuckerberg even imagining AI therapists and friends that recall your social context better than you do.

As these tools become more capable, they also grow more invasive. Wearable, always-on AI devices equipped with microphones and cameras are on the horizon, signalling an era of ambient data collection.

AI assistants won’t just help answer questions—they’ll book vacations, buy gifts, and even manage your calendar. But with these conveniences comes unprecedented access to our most intimate data, raising serious concerns over surveillance and manipulation.

Policymakers are struggling to keep up. Without a comprehensive federal privacy law, the US relies on a patchwork of state laws and limited federal oversight. Proposals to regulate data sharing, such as forcing Google to hand over user search histories to competitors like OpenAI and Meta, risk compounding the problem unless strict safeguards are enacted.

As AI becomes the new gatekeeper to the internet, regulators face a daunting task: enabling innovation while ensuring that the AI-powered future doesn’t come at the expense of our privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!