Google has unveiled the Agent Payments Protocol (AP2), a new system enabling AI applications to send and receive payments, including stablecoins pegged to traditional currencies.
Developed with Coinbase, the Ethereum Foundation, and over 60 other finance and technology firms, AP2 aims to standardise transactions between AI agents and merchants.
The protocol builds on Google’s earlier Agent2Agent framework, extending it to financial interactions. AP2 supports credit and debit cards, bank transfers, and stablecoins, providing a secure and compliant foundation for automated payments.
By introducing a shared language for AI-led transactions, the system addresses risks linked to authorisation, authenticity, and accountability without human intervention.
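The announcement does not spell out AP2's message format, but a rough sketch can illustrate what a shared language for agent-led payments implies: a structured, signed record tying an agent's purchase to an explicit human authorisation. The field names and checks below are illustrative assumptions written in Python, not the published AP2 schema.

```python
# Illustrative sketch only: field names and checks are assumptions,
# not the actual AP2 specification.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class PaymentMandate:
    """A user-authorised instruction that an AI agent may spend on the user's behalf."""
    agent_id: str         # the agent initiating the purchase
    merchant_id: str      # the merchant receiving payment
    amount: str           # decimal string, e.g. "25.00"
    currency: str         # "USD", or a stablecoin symbol such as "USDC"
    payment_method: str   # "card", "bank_transfer", or "stablecoin"
    user_signature: str   # proof that a human authorised this mandate
    expires_at: datetime  # timezone-aware expiry; the mandate is void afterwards


def is_actionable(mandate: PaymentMandate) -> bool:
    """Reject mandates that lack authorisation or have expired."""
    if not mandate.user_signature:
        return False
    return datetime.now(timezone.utc) < mandate.expires_at
```

However a real implementation encodes it, the point of such a record is accountability: every agent-initiated payment can be traced back to a specific, time-limited human authorisation.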
The project reflects growing interest in stablecoins, whose circulation recently rose to $289 billion from $205 billion at the start of the year. Integrating stablecoins into AI could change how automated systems manage payments, from daily purchases to complex financial tasks.
Google and its collaborators emphasise AP2’s goal of interoperability across industries, offering flexibility, compliance, and scalability. The initiative makes digital money central to AI, signalling a shift in automated financial transactions.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
YouTube has unveiled a new suite of AI tools designed to enhance the creation of Shorts, with its headline innovation being Veo 3 Fast, a streamlined version of Google DeepMind’s video model.
The model can generate 480p clips with sound almost instantly, marking the first time audio has been added to Veo-generated Shorts. Rather than a limited release, it is already rolling out in the US, the UK, Canada, Australia and New Zealand, with other regions to follow.
The platform also introduced several advanced editing features, such as motion transfer from video to still images, text-based styling, object insertion and Speech to Song Remixing, which converts spoken dialogue into music through DeepMind’s Lyria 2 model.
Testing will begin in the US before global expansion.
Another innovation, Edit with AI, automatically assembles raw footage into a rough cut complete with transitions, music and interactive voiceovers. YouTube confirmed the tool is in trials and will launch in select markets within weeks.
All AI-generated Shorts will display labels and watermarks to maintain transparency, as YouTube pushes to expand creator adoption and boost Shorts’ growth as a rival to TikTok and Instagram Reels.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Tencent’s ADP enables businesses to build autonomous AI agents that can be integrated into workflows, including customer service, marketing, inventory management, and research.
Tencent is also upgrading its internal models and infrastructure, such as ‘Agent Runtime’, to support stable, secure, and business-aligned agent deployment.
Other new tools include the SaaS+AI toolkit, which enhances productivity in office collaboration (for example, AI Minutes in Tencent Meetings) and knowledge management via Tencent LearnShare. A coding assistant called CodeBuddy is claimed to reduce developers’ coding time by 40 percent while increasing R&D efficiency by about 16 percent.
In line with its international expansion, Tencent Cloud announced that its overseas client base has doubled since last year and that it now operates across over 20 regions.
The rollout also includes open-source contributions: multilingual translation models, large multimodal models, and new Hunyuan 3D creative tools have been made available globally.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Cybersecurity is a critical element of the US Army’s ongoing transformation and of wider national efforts to safeguard critical infrastructure, according to Brandon Pugh, Principal Cyber Adviser to the Secretary of the Army. Speaking at the Billington CyberSecurity Summit on 11 September, Pugh explained that the Army’s Continuous Transformation initiative is intended to deliver advanced technologies to soldiers more rapidly, ensuring readiness for operational environments where cybersecurity underpins every aspect of activity, from base operations to mobilisation.
During the panel discussion, Pugh emphasised that defending the homeland remains a central priority, with the Army directly affected by vulnerabilities in privately owned critical infrastructure such as energy and transport networks. He referred to research conducted by the Army Cyber Institute at the US Military Academy at West Point, which analyses how weaknesses in infrastructure could undermine the Army’s ability to project forces in times of crisis or conflict.
The other panellists agreed that maintaining strong basic cyber hygiene is essential. Josh Salmanson, Vice President for the Defence Cyber Practice at Leidos, underlined the importance of measures such as timely patching, reducing vulnerabilities, and eliminating shared passwords, all of which help to reduce noise in networks and strengthen responses to evolving threats.
The discussion also considered the growing application of AI in cyber operations. Col. Ivan Kalabashkin, Deputy Head of Ukraine’s Security Services Cyber Division, reported that Ukraine has faced more than 13,000 cyber incidents directed at government and critical infrastructure systems since the start of the full-scale war, noting that Russia has in recent months employed AI to scan for network vulnerabilities.
Pugh stated that the Army is actively examining how AI can be applied to enhance both defensive and potentially offensive cyber operations, pointing to significant ongoing work within Army Cyber Command and US Cyber Command.
Finally, Pugh highlighted the Army’s determination to accelerate the introduction of cyber capabilities, particularly from innovative companies offering specialist solutions. He stressed the importance of acquisition processes that enable soldiers to test new capabilities within weeks, in line with the Army’s broader drive to modernise how it procures, evaluates, and deploys technology.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Indonesia Investment Authority (INA), the country’s sovereign wealth fund, is sharpening its focus on digital infrastructure, healthcare and renewable energy as it seeks to attract foreign partners and strengthen national development.
The fund, created in 2021 with $5 billion in state capital, now manages assets worth around $10 billion and is expanding its scope beyond equity into hybrid capital and private credit.
Chief investment officer Christopher Ganis said data centres and supporting infrastructure, such as sub-sea cables, were key priorities as the government emphasises data independence and resilience.
INA has already teamed up with Singapore-based Granite Asia to invest over $1.2 billion in Indonesia’s technology and AI ecosystem, including a new data centre campus in Batam. Ganis added that AI would be applied first in healthcare rather than rushed into broader adoption.
Renewables also remain central to INA’s strategy, with its partnership alongside Abu Dhabi’s Masdar Clean Energy in Pertamina Geothermal Energy cited as a strong performer.
Ganis said Asia’s reliance on bank financing highlights the need for INA’s support in cross-border growth, since domestic banks cannot always facilitate overseas expansion.
Despite growing global ambitions, INA will prioritise projects directly linked to Indonesia. Ganis stressed that it must deliver benefits at home instead of directing capital into ventures without a clear link to the country’s future.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI has come far from rule-based systems and chatbots with preset answers. Large language models (LLMs), powered by vast amounts of data and statistical prediction, now generate text that can mirror human speech, mimic tone, and simulate expertise, but also produce convincing hallucinations that blur the line between fact and fiction.
From summarising policy to drafting contracts and responding to customer queries, these tools are becoming embedded across industries, governments, and education systems.
As their capabilities grow, so does the underlying problem that many still underestimate. These systems frequently produce convincing but entirely false information. Often referred to as ‘AI hallucinations’, such factual distortions pose significant risks, especially when users trust outputs without questioning their validity.
Once deployed in high-stakes environments, from courts to political arenas, the line between generative power and generative failure becomes more challenging to detect and more dangerous to ignore.
When facts blur into fiction
AI hallucinations are not simply errors. They are confident statements presented as fact, even though they rest on nothing more than statistical probability. Language models are designed to generate the most likely next word, not the correct one. That difference may be subtle in casual settings, but it becomes critical in fields like law, healthcare, or media.
One such example emerged when an AI chatbot misrepresented political programmes in the Netherlands, falsely attributing policy statements about Ukraine to the wrong party. The error nonetheless spread misinformation and triggered official concern. The chatbot had no malicious intent, yet its hallucination shaped public discourse.
Mistakes like these often pass unnoticed because the tone feels authoritative. The model sounds right, and that is the danger.
Why large language models hallucinate
Hallucinations are not bugs in the system. They are a direct consequence of the way language models are built. Trained to complete text based on patterns, these systems have no fundamental understanding of the world, no memory of ‘truth’, and no internal model of fact.
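A toy illustration (not any real model) makes the mechanism concrete: given only continuation probabilities, a greedy decoder picks the most likely phrase, and nothing in the procedure checks whether the winning phrase is true. The probabilities below are invented for the example.

```python
# Toy illustration of greedy next-phrase selection; the probabilities are invented.
continuations = {
    "won the Nobel Prize in 2011": 0.46,   # fluent and plausible, but false here
    "never received a Nobel Prize": 0.31,  # the factually correct option
    "declined the award": 0.23,
}

prompt = "The physicist in question"
best = max(continuations, key=continuations.get)  # pick the highest-probability phrase
print(f"{prompt} {best}.")
# Nothing in this procedure consults a source of truth: likelihood, not accuracy, decides.
```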
A recent study reveals that even the way models are tested may contribute to hallucinations. Instead of rewarding caution or encouraging honesty, current evaluation frameworks favour responses that appear complete and confident, even when inaccurate. The more assertive the lie, the better it scores.
Alongside these structural flaws, real-world use reveals further triggers. The most frequent causes of AI hallucinations include:
Vague or ambiguous prompts
Lack of specificity forces the model to fill gaps with speculative content that may not be grounded in real facts.
Overly long conversations
As prompt history grows, especially without proper context management, models lose track and invent plausible answers.
Missing knowledge
When a model lacks reliable training data on a topic, it may produce content that appears accurate but is fabricated.
Leading or biased prompts
Inputs that suggest a specific answer can nudge the model into confirming something untrue to match expectations.
Interrupted context due to connection issues
Especially with browser-based tools, a brief loss of session data can cause the model to generate off-track or contradictory outputs.
Over-optimisation for confidence
Most systems are trained to sound fluent and assertive. Saying ‘I don’t know’ is statistically rare unless explicitly prompted.
Each of these cases stems from a single truth. Language models are not fact-checkers. They are word predictors. And prediction, without verification, invites fabrication.
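One practical mitigation follows directly from the last point in the list: make abstention an explicit instruction rather than hoping the model volunteers it. Below is a minimal sketch, assuming the OpenAI Python client (v1+) with an API key configured; the model name is a placeholder, and the same pattern applies to any chat-style API.

```python
# Minimal sketch, assuming the OpenAI Python client is installed and an API key is set.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Answer only from information you are confident about. "
    "If you are not certain, reply exactly: 'I don't know.' "
    "Never invent citations, statistics, or quotations."
)

def ask(question: str) -> str:
    """Send one question with an explicit instruction permitting abstention."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model id, not a recommendation
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0,  # lower randomness reduces, but does not remove, fabrication
    )
    return response.choices[0].message.content

print(ask("Which party proposed the policy described above?"))
```

Prompting of this kind narrows the space for confident invention, but it does not change the underlying objective the model was trained on.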
The cost of trust in flawed systems
Hallucinations become more dangerous not when they happen, but when they are believed.
Users may not question the output of an AI system if it appears polished, grammatically sound, and well-structured. This perceived credibility can lead to real consequences, including legal documents based on invented cases, medical advice referencing non-existent studies, or voters misled by political misinformation.
In low-stakes scenarios, hallucinations may lead to minor confusion. In high-stakes contexts, the same dynamic can result in public harm or institutional breakdown. Once generated, an AI hallucination can be amplified across platforms, indexed by search engines, and cited in real documents. At that point, it becomes a synthetic fact.
Can hallucinations be fixed?
Some efforts are underway to reduce hallucination rates. Retrieval-augmented generation (RAG), fine-tuning on verified datasets, and human-in-the-loop moderation can improve reliability. Still, no method has eliminated hallucinations.
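Of these, retrieval-augmented generation is the easiest to illustrate. The sketch below is a deliberately simplified assumption rather than any particular vendor's pipeline: it retrieves the most relevant passages first, then builds a prompt that tells the model to answer only from that context or abstain. Production systems replace the keyword matcher with embedding search over a vector store.

```python
# Deliberately simplified RAG sketch: a keyword-overlap retriever plus a grounded prompt.
DOCUMENTS = [
    "Party A's 2023 programme proposes continued military support for Ukraine.",
    "Party B's 2023 programme focuses on domestic energy subsidies.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by crude word overlap with the question and keep the top k."""
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Assemble a prompt that restricts the model to the retrieved context."""
    context = "\n".join(retrieve(question, DOCUMENTS))
    return (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, say 'I don't know.'\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("Which party proposed continued support for Ukraine?"))
```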
The deeper issue is how language models are rewarded, trained, and deployed. Without institutional norms prioritising verifiability and technical mechanisms that can flag uncertainty, hallucinations will remain embedded in the system.
Even the most capable AI models need to be built with a measure of humility. The ability to say ‘I don’t know’ is still one of the rarest responses in the current landscape.
Hallucinations won’t go away. Responsibility must step in.
Language models are not truth machines. They are prediction engines trained on vast and often messy human data. Their brilliance lies in fluency, but fluency can easily mask fabrication.
As AI tools become part of our legal, political, and civic infrastructure, institutions and users must approach them critically. Trust in AI should never be passive. And without active human oversight, hallucinations may not just mislead; they may define the outcome.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI stressed that conversations with AI often involve sensitive personal information, which should be treated with the same level of protection as communications with doctors or lawyers.
At the same time, it aims to grant adult users broad freedom to direct AI responses, provided safety boundaries are respected.
The situation changes for younger users. Teenagers are seen as requiring stricter safeguards, with safety taking priority over privacy and freedom. OpenAI is developing age-prediction tools to identify users under 18, and where uncertainty exists, it will assume the user is a teenager.
Teen users will face tighter restrictions on certain types of content. ChatGPT will be trained not to engage in flirtatious exchanges, and sensitive issues such as self-harm will be carefully managed.
If signs of suicidal thoughts appear, the company says it will first try to alert parents. Where there is imminent risk and parents cannot be reached, OpenAI is prepared to notify the authorities.
The new approach raises questions about privacy trade-offs, the accuracy of age prediction, and the handling of false classifications.
Critics may also question whether restrictions on creative content hinder expression. OpenAI acknowledges these tensions but argues the risks faced by young people online require stronger protections.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Meta is set to unveil its first pair of smart glasses with a built-in display at its annual Connect conference in California.
Expected to be called Celeste, the glasses will debut at around $800 and feature a small digital display in the right lens for notifications. Analysts say the higher price point could limit adoption compared with Meta’s Ray-Ban line, which starts at $299.
Alongside the new glasses, Meta is also expected to launch its first wristband for hand-gesture control and an updated Ray-Ban line with better cameras, battery life and AI features. Developers will gain access to a new software kit to build device apps.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Babbel’s chief executive, Tim Allen, said the aim is not instant fluency but helping learners move from first words to confident conversations.
Called Babbel Speak, the AI feature guides users through 28 real-life scenarios, such as ordering coffee or describing the weather. It provides personalised feedback and uses a calming design with animations to ease anxiety while learning.
The trainer is available in open beta on the App Store and Play Store for English, Spanish, French, Italian, and German.
Subscribers can try it as part of Babbel’s standard plans, which start at $107.40 per year, with a lifetime option also offered.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A small Japanese political party has announced plans to install an AI system as its leader following its founder’s resignation.
The Path to Rebirth party was created in January by Shinji Ishimaru, a former mayor who rose to prominence after placing second in the 2024 Tokyo gubernatorial election. He stepped down after the party failed to secure seats in this year’s upper house elections.
The AI would oversee internal decisions such as distributing resources, but would not dictate members’ political activities. Okumura, who won a contest to succeed Ishimaru, will act as the nominal leader while supporting the development of the AI.
Despite attracting media attention, the party has faced heavy electoral defeats, with all 42 of its candidates losing in the June Tokyo assembly election and all 10 of its upper house candidates defeated in July.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!