Visa and Mastercard have announced major AI initiatives that could reshape the future of e-commerce, marking a significant step in the evolution of retail technology.
The initiatives—Visa’s Intelligent Commerce and Mastercard’s Agent Pay—move beyond traditional recommendation engines to empower AI agents to make purchases directly on behalf of consumers.
Visa is partnering with leading tech firms, including Anthropic, IBM, Microsoft, OpenAI, and Stripe, to build a system where AI agents shop according to user preferences.
Meanwhile, Mastercard’s Agent Pay integrates payment functionality into AI-driven conversational platforms, blending commerce and conversation into a seamless user experience.
These announcements follow years of AI integration into retail, with adoption growing at 40% annually and the market projected to surpass $8 billion by 2024. Retailers initially used AI for backend optimisation, but nearly 87% now apply it in customer-facing roles.
The next phase, where AI doesn’t just suggest but acts, is rapidly taking shape—backed by consumer demand for hyper-personalisation and efficiency.
Research suggests 71% of consumers want generative AI embedded in their shopping journeys, with 58% already turning to AI tools over traditional search engines for recommendations. However, consumer trust remains a challenge.
Satisfaction with AI dropped slightly last year, highlighting concerns over privacy and implementation quality—especially critical for financial transactions.
Visa and Mastercard’s moves reflect both opportunity and necessity. With 75% of retailers viewing AI agents as essential within the next year, and AI expected to handle 20% of e-commerce tasks, the payment giants are positioning themselves as indispensable infrastructure in a fast-changing market.
Their broad alliances across AI, payments, and tech underline a shared goal: to stay central as shopping behaviours evolve in the AI era.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Just one week after releasing its most advanced AI models to date — Opus 4 and Sonnet 4 — Anthropic CEO Dario Amodei warned in an interview with Axios that AI could soon reshape the job market in alarming ways.
AI, he said, may be responsible for eliminating up to half of all entry-level white-collar roles within the next one to five years, potentially driving unemployment as high as 10% to 20%.
Amodei’s goal in speaking publicly is to help workers prepare and to urge both AI companies and governments to be more transparent about coming changes. ‘Most of them [workers] are unaware that this is about to happen,’ he told Axios. ‘It sounds crazy, and people just don’t believe it.’
According to Amodei, the shift from AI augmenting jobs to fully automating them could begin as soon as two years from now. He highlighted how widespread displacement may threaten democratic stability and deepen inequality, as large groups of people lose the ability to generate economic value.
Despite these warnings, Amodei explained that competitive pressures prevent developers from slowing down. Regulatory caution in the US, he suggested, would only result in countries like China advancing more rapidly.
Still, not all implications are negative. Amodei pointed to major breakthroughs in other areas, such as healthcare, as part of the broader impact of AI.
‘Cancer is cured, the economy grows at 10% a year, the budget is balanced — and 20% of people don’t have jobs,’ he said.
To prepare society, Amodei called for increased public awareness, encouraging individuals to reconsider career paths and avoid the most automation-prone fields.
He referenced the Anthropic Economic Index, which monitors how AI affects different occupations. At its launch in February, the index showed that 57% of AI use cases still supported human tasks rather than replacing them.
However, during a press-only session at Code with Claude, Amodei noted that augmentation is likely to be a short-term strategy. He described a ‘rising waterline’ — the gradual shift from assistance to full replacement — which may soon outpace efforts to retain human roles.
‘When I think about how to make things more augmentative, that is a strategy for the short and the medium term — in the long term, we are all going to have to contend with the idea that everything humans do is eventually going to be done by AI systems. That is a constant. That will happen,’ he said.
His other recommendations included boosting AI literacy and equipping public officials with a deeper understanding of superintelligent systems, so they can begin forming policy for a radically transformed economy.
While Amodei’s outlook may sound daunting, it echoes a pattern seen throughout history: every major technological disruption brings workforce upheaval. Though some roles vanish, others emerge. Several studies suggest AI may even highlight the continued relevance of distinctively human skills.
Regardless of the outcome, one thing remains clear — learning to work with AI has never been more important.
The New York Times Company and Amazon have signed a multi-year licensing agreement that will allow Amazon to integrate editorial content from The New York Times, NYT Cooking, and The Athletic into a range of its AI-powered services, the companies announced Wednesday.
Under the deal, Amazon will use licensed content for real-time display in consumer-facing products such as Alexa, as well as for training its proprietary foundation models. The agreement marks an expansion of the firms’ existing partnership.
‘The agreement expands the companies’ existing relationship, and will deliver additional value to Amazon customers while bringing Times journalism to broader audiences,’ the companies said in a joint statement.
According to the announcement, the licensing terms include ‘real-time display of summaries and short excerpts of Times content within Amazon products and services’ alongside permission to use the content in AI model development. Amazon platforms will also feature direct links to full Times articles.
Both companies described the partnership as a reflection of a shared commitment to delivering global news and information across Amazon’s AI ecosystem. Financial details of the agreement were not made public.
The announcement comes amid growing industry debate about the role of journalistic material in training AI systems.
By entering a formal licensing arrangement, The New York Times positions itself as one of the first major media outlets to publicly align with a technology company for AI-related content use.
The companies have yet to name additional Amazon products that will feature Times content, and no timeline has been disclosed for the rollout of the new integrations.
It feels like just yesterday that the internet was buzzing over the first renditions of OpenAI’s DALL·E tool, with millions competing to craft the funniest, weirdest prompts and sharing the results across social media. The sentiment was clear: the public was fascinated by the creative potential of this new technology.
But beneath the laughter and viral memes was a quieter, more uneasy question: what happens when AI not only generates quirky artwork, but begins to reshape our daily lives, both online and off? As it turns out, that process was already underway behind the scenes, and we were none the wiser.
AI in action: How the entertainment industry is using it today
Three years later, we have reached a point where AI’s influence seems to have passed the point of no return. The entertainment industry was among the first to embrace this technology, and starting with the 2025 Academy Awards, films that incorporate AI are now eligible for Oscar nominations.
That decision has been met with mixed reactions, to put it lightly. While some have praised the industry’s eagerness to explore new technological frontiers, others have claimed that AI greatly diminishes the human contribution to the art of filmmaking and therefore takes away the essence of the seventh art form.
The first wave of AI-enhanced storytelling
One recent example is the film The Brutalist, in which AI was used to refine Adrien Brody’s Hungarian dialogue to sound more authentic, a move that sparked both technical admiration and creative scepticism.
With AI now embedded in everything from voiceovers to entire digital actors, we are only beginning to confront what it truly means when creativity is no longer exclusively human.
Adrien Brody’s Hungarian dialogue in ‘The Brutalist’ was refined with generative AI to make it sound more authentic. (Screenshot: YouTube / Oscars)
Setting the stage: AI in the spotlight
The first major big-screen resurrection occurred in 1994’s The Crow, where Brandon Lee’s sudden passing mid-production forced the studio to rely on body doubles, digital effects, and existing footage to complete his scenes. However, it was not until 2016 that audiences witnessed the first fully digital revival.
In Rogue One: A Star Wars Story, Peter Cushing’s character was brought back to life using a combination of CGI, motion capture, and a facial stand-in. Although primarily reliant on traditional VFX, the project paved the way for future use of deepfakes and AI-assisted performance recreation across movies, TV shows, and video games.
Afterward, some speculated that studios tied to Peter Cushing’s legacy, such as Tyburn Film Productions, could pursue legal action against Disney for reviving his likeness without direct approval. While no lawsuit was filed, questions were raised about who owns a performer’s digital identity after death.
The digital Jedi: How AI helped recreate Luke Skywalker
As fate would have it, AI’s grand debut took place in a galaxy far, far away, with the surprise appearance of Luke Skywalker in the Season 2 finale of The Mandalorian (spoiler alert). The moment thrilled fans and marked a turning point for the franchise, but it was more than just fan service.
Here’s the twist: Mark Hamill did not record any new voice lines. Instead, actor Max Lloyd-Jones performed the physical role, while a younger version of Hamill’s voice was synthesised with the help of Respeecher, a Ukrainian company specialising in AI-driven speech synthesis.
Impressed by their work, Disney turned to Respeecher once again, this time to recreate James Earl Jones’s iconic Darth Vader voice for the Obi-Wan Kenobi miniseries. Using archival recordings that Jones signed over for AI use, the system synthesised new dialogue that perfectly matched the intonation and timbre of his original trilogy performances.
AI in moviemaking: Preserving legacy or crossing a line?
The use of AI to preserve and extend the voices of legendary actors has been met with a mix of admiration and unease. While many have praised the seamless execution and respect shown toward the legacy of both Hamill and Jones, others have raised concerns about consent, creative authenticity, and the long-term implications of allowing AI to perform in place of humans.
In both cases, the actors were directly involved or gave explicit approval, but these high-profile examples may be setting a precedent for a future where that level of control is not guaranteed.
A notable case that drew backlash was the planned use of a fully CGI-generated James Dean in the unreleased film Finding Jack, decades after his death. Critics and fellow actors voiced strong opposition, arguing that bringing back a performer without their consent reduces them to a brand or asset, rather than honouring them as an artist.
AI in Hollywood: Actors made redundant?
What further heightened concerns among working actors was the launch of Promise, a new Hollywood studio built entirely around generative AI. Backed by wealthy investors, Promise is betting big on Muse, a GenAI tool designed to produce high-quality films and TV series at a fraction of the cost and time required for traditional Hollywood productions.
Filmmaking is a business, after all, and with production budgets ballooning year after year, AI-powered entertainment sounds like a dream come true for profit-driven studios.
Meta’s recent collaboration with Blumhouse Productions on MovieGen only adds fuel to the fire, signalling that major players are eager to explore a future where storytelling may be driven as much by algorithms as by authentic artistry.
AI in gaming: Automation or artistic collapse?
Speaking of entertainment businesses, we cannot ignore the world’s most popular entertainment medium: gaming. While the pandemic triggered a massive boom in game development and player engagement, the momentum was short-lived.
As profits began to slump in the years that followed, the industry was hit by a wave of layoffs, prompting widespread internal restructuring and forcing publishers to rethink their business models entirely. In hopes of cost-cutting, AAA companies had their eye on AI as their one saving grace.
Nvidia’s development of AI chips, together with Ubisoft’s and EA’s investments in AI and machine learning, has sent a clear signal to the industry: automation is no longer just a backend tool, it is a front-facing strategy.
With AI-assisted NPC behaviour and AI voice acting, game development is shifting toward faster, cheaper, and potentially less human-driven production. In response, game developers have grown concerned about their future in the industry, and actors are less inclined to sign away their rights for future projects.
AI voice acting in video games
In an attempt to compete with wealthier studios, even indie developers have turned to GenAI to replicate the voices of celebrity voice actors. Tools like ElevenLabs and Altered Studio offer a seemingly straightforward way to get high-quality talent, but it is rarely that simple.
Copyright laws and concerns over authenticity remain two of the strongest barriers to the widespread adoption of AI-generated voices, especially as many consumers still view the technology as a crutch rather than a creative tool for game developers.
The legal landscape around AI-generated voices remains murky. In many places, the rights to a person’s voice, or its synthetic clone, are poorly defined, creating loopholes developers can exploit.
AI voice cloning challenges legal boundaries in gaming
The legal ambiguity has fuelled a backlash from voice actors, who argue that their performances are being mimicked without consent or pay. SAG-AFTRA and others began pushing for tighter legal protections in 2023.
A notable flashpoint came in 2025, when Epic Games faced criticism for using an AI-generated Darth Vader voice in Fortnite. SAG-AFTRA filed a formal complaint, citing licensing concerns and a lack of actor involvement.
Not all uses have been controversial. CD Projekt Red recreated the voice of the late Miłogost Reczek in Cyberpunk 2077: Phantom Liberty, with his family’s blessing, thus setting a respectful precedent for the ethical use of AI.
How AI is changing music production and artist identity
AI is rapidly reshaping music production, with a recent survey showing that nearly 25% of producers are already integrating AI tools into their creative workflows. This shift reflects a growing trend in how technology is influencing composition, mixing, and even vocal performance.
Artists like Imogen Heap are embracing the change with projects like Mogen, an AI version of herself that can create music and interact with fans, blurring the line between human creativity and digital innovation.
Major labels are also experimenting: Universal Music has recently used AI to reimagine Brenda Lee’s 1958 classic in Spanish, preserving the spirit of the original while expanding its cultural reach.
AI and the future of entertainment
As AI becomes more embedded in entertainment, the line between innovation and exploitation grows thinner. What once felt like science fiction is now reshaping the way stories are told, and who gets to tell them.
Whether AI becomes a tool for creative expansion or a threat to human artistry will depend on how the industry and audiences choose to engage with it in the years ahead. As in any business, consumers vote with their wallets, and only time will tell whether AI and authenticity can truly go hand-in-hand.
Melania Trump has released an audiobook version of her memoir, but the voice readers hear isn’t hers in the traditional sense. Instead, it’s an AI-generated replica, created under her guidance and produced using technology from ElevenLabs.
Announcing the release as ‘The AI Audiobook,’ Trump described the project as a step into the future of publishing, highlighting how AI is now entering mainstream media production. The move places AI-generated content into the public spotlight, especially as tech companies like Google and OpenAI are rolling out advanced tools to create audio, video, and even entire scenes with minimal human input.
While experts note that a complete replacement of voice actors and media professionals is unlikely in the immediate future, Trump’s audiobook represents a notable shift that aligns with rising interest from television and media companies looking to explore AI integration to compete with social media creators.
‘A NEW ERA IN PUBLISHING. I am honored to bring you Melania – The AI Audiobook – narrated entirely using artificial intelligence in my own voice,’ Trump announced.
Industry observers suggest this trend could lead to a more interactive form of media. Imagine, for instance, engaging in a two-way conversation with a virtual Melania Trump about her book.
Though this level of interactivity isn’t here yet, it’s on the horizon as companies experiment with AI-generated personalities and digital avatars to enhance viewer engagement and create dynamic experiences. Still, the growth of generative AI sparks concern about job security in creative fields.
While some roles, like voiceover work, are vulnerable to automation, others—especially those requiring human insight and emotional intelligence, like investigative journalism—remain more resistant. Rather than eliminating jobs outright, AI may reshape media employment, demanding hybrid skills that combine traditional storytelling with technological proficiency.
The Arab Centre for Artificial Intelligence (ACAI) and India’s Universal AI University (UAI) have partnered through a Memorandum of Understanding (MoU) to accelerate the advancement of AI across Qatar and the broader region. That collaboration aims to enhance education, research, and innovation in AI and emerging technologies.
Together, ACAI and UAI plan to establish a specialised AI research centre and develop advanced training programs to cultivate local expertise. They will also launch various online and short-term educational courses designed to address the growing demand for skilled AI professionals in Qatar’s job market, ensuring that the workforce is well-prepared for future technological developments.
Looking forward, the partnership envisions creating a dedicated AI-focused university campus. The initiative aligns with Qatar’s vision to transition into a knowledge-based economy by fostering innovation and offering academic programs in AI, engineering, business administration, environmental sustainability, and other emerging technologies.
The MoU is valid for ten years and includes provisions for dispute resolution, intellectual property rights management, and annual reviews to ensure tangible and sustainable outcomes. Further detailed implementation agreements are expected to formalise the partnership’s operational aspects.
Latin America is uniquely positioned to lead on AI governance by leveraging its social rights-focused policy tradition, emerging tech ecosystems, and absence of legacy systems.
According to a new commentary by Eduardo Levy Yeyati at the Brookings Institution, the region has the opportunity to craft smart AI regulation that is both inclusive and forward-looking, balancing innovation with rights protection.
Despite global momentum on AI rulemaking, Latin American regulatory efforts remain slow and fragmented, underlining the need for early action and regional cooperation.
The proposed framework recommends flexible, enforceable policies grounded in local realities, such as adapting credit algorithms for underbanked populations or embedding linguistic diversity in AI tools.
Governments are encouraged to create AI safety units, invest in public oversight, and support SMEs and open-source innovation to avoid monopolisation. Regulation should be iterative and participatory, using citizen consultations and advisory councils to ensure legitimacy and resilience through political shifts.
Regional harmonisation will be critical to avoid a patchwork of laws and promote Latin America’s role in global AI governance. Coordinated data standards, cross-border oversight, and shared technical protocols are essential for a robust, trustworthy ecosystem.
Rather than merely catching up, Latin America can become a global model for equitable and adaptive AI regulation tailored to the needs of developing economies.
At least 12 independent news websites in Jordan have been blocked by the authorities without any formal legal justification or opportunity for appeal. Rights groups have condemned the move as a serious violation of constitutional and international protections for freedom of expression.
The Jordanian Media Commission issued the directive on 14 May 2025, citing vague claims such as ‘spreading media poison’ and ‘targeting national symbols’, without providing evidence or naming the sites publicly.
The timing of the ban suggests it was a retaliatory act against investigative reports alleging profiteering by state institutions in humanitarian aid efforts to Gaza. Affected outlets were subjected to intimidation, and the blocks were imposed without judicial oversight or a transparent legal process.
Observers warn this sets a dangerous precedent, reflecting a broader pattern of repression under Jordan’s Cybercrime Law No. 17 of 2023, which grants sweeping powers to restrict online speech.
Civil society organisations call for the immediate reversal of the ban, transparency over its legal basis, and access to judicial remedies for affected platforms.
They urge a comprehensive review of the cybercrime law to align it with international human rights standards. Press freedom, they argue, is a pillar of democratic society and must not be sacrificed under the guise of combating disinformation.
AI will likely cause significant job disruption in the next five years, according to Demis Hassabis, CEO of Google DeepMind. Speaking on the Hard Fork podcast, Hassabis emphasised that while AI is set to displace specific jobs, it will also create new roles that are potentially more meaningful and engaging.
He urged younger generations to prepare for a rapidly evolving workforce shaped by advanced technologies. Hassabis stressed the importance of early adaptation, particularly for Generation Alpha, who he believes should embrace AI just as millennials did the internet and Gen Z did smartphones.
Hassabis also called on students to become ‘ninjas with AI,’ encouraging them to understand how these tools work and master them for future success. While he highlighted the potential of generative AI, such as Google’s new Veo 3 video generator unveiled at I/O 2025, Hassabis also reminded listeners that a solid foundation in STEM remains vital.
He noted that soft skills like creativity, resilience, and adaptability are equally essential—traits that will help young people thrive in a future defined by constant technological change. As AI becomes more deeply embedded in industries from education to entertainment, Hassabis’ message is clear – the next generation must balance technical knowledge with human ingenuity to stay ahead in tomorrow’s job market.
Pakistan has set aside 2,000 megawatts of electricity in a major push to power Bitcoin mining and AI data centres, marking the start of a wider national digital strategy.
Led by the Pakistan Crypto Council (PCC), a body under the Ministry of Finance, this initiative aims to monetise surplus energy instead of wasting it, while attracting foreign investment, creating jobs, and generating much-needed revenue.
Bilal Bin Saqib, CEO of the PCC, stated that with proper regulation and transparency, Pakistan can transform into a global powerhouse for crypto and AI.
By redirecting underused power capacity, particularly from plants operating below potential, Pakistan seeks to convert a longstanding liability into a high-value asset, earning foreign currency through digital services and even storing Bitcoin in a national wallet.
Global firms have already shown interest, following recent visits from international miners and data centre operators.
Pakistan’s location — bridging Asia, the Middle East, and Europe — coupled with low energy costs and ample land, positions it as a competitive alternative to regional tech hubs like India and Singapore.
The arrival of the Africa-2 subsea cable has further boosted digital connectivity and resilience, strengthening the case for domestic AI infrastructure.
The allocation is just the first phase of a multi-stage rollout. Plans include using renewable energy sources like wind, solar, and hydropower, while tax incentives and strategic partnerships are expected to follow.
With over 40 million crypto users and increasing digital literacy, Pakistan aims to emerge not just as a destination for digital infrastructure but as a sovereign leader in Web3, AI, and blockchain innovation.