Tencent launches scenario-based AI globally to boost industrial efficiency

Tencent has announced the global rollout of scenario-based AI capabilities to help enterprises boost industrial efficiency. At its 2025 Global Digital Ecosystem Summit, held in Shenzhen, the company introduced its Agent Development Platform 3.0 (ADP) via Tencent Cloud.

ADP enables businesses to build autonomous AI agents that can be integrated into workflows, including customer service, marketing, inventory management, and research.

Tencent is also upgrading its internal models and infrastructure, such as ‘Agent Runtime’, to support stable, secure, and business-aligned agent deployment.

Other new tools include the SaaS+AI toolkit, which enhances productivity in office collaboration (for example, AI Minutes in Tencent Meetings) and knowledge management via Tencent LearnShare. A coding assistant called CodeBuddy is claimed to reduce developers’ coding time by 40 percent while increasing R&D efficiency by about 16 percent.

In line with its international expansion, Tencent Cloud announced that its overseas client base has doubled since last year and that it now operates across over 20 regions.

The rollout also includes open-source contributions: multilingual translation models, large multimodal models, and new Hunyuan 3D creative tools have been made available globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube launches new AI tools to simplify video creation

YouTube has introduced new AI-powered tools to make video creation more playful and effortless. The features include Veo 3 Fast, a video generation model from Google DeepMind, now integrated into YouTube Shorts.

Veo 3 Fast allows creators to generate videos with sound directly from their phones at 480p, all for free.

New Shorts capabilities let users add motion to photos, apply artistic styles, and insert objects into scenes with simple text prompts. These tools expand creative options and simplify content creation, with YouTube set to test them in the coming months.

The platform also launched Edit with AI, which automatically transforms raw footage into a first draft with music, transitions, and voiceovers in English or Hindi. The feature helps creators quickly develop their videos, leaving more time for personalisation and refinement.

In addition, YouTube introduced Speech to Song, enabling users to remix dialogue from eligible videos into catchy soundtracks using Lyria 2, Google DeepMind’s AI music model. All AI-generated content includes SynthID watermarks and content labels to ensure transparency and proper attribution.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hollywood studios take legal action against MiniMax for AI copyright infringement

Disney, Warner Bros. Discovery and NBCUniversal have filed a lawsuit in California against Chinese AI company MiniMax, accusing it of large-scale copyright infringement.

The studios allege that MiniMax’s Hailuo AI service generates unauthorised images and videos featuring well-known characters such as Darth Vader, and that it markets itself as a ‘Hollywood studio in your pocket’ while disregarding copyright law.

According to the complaint, MiniMax, reportedly worth $4 billion, ignored cease-and-desist requests and continues to profit from copyrighted works. The studios argue that the company could easily implement safeguards, pointing to existing controls that already block violent or explicit content.

The studios claim that MiniMax’s approach represents a serious threat to both creators and the broader film industry, which contributes hundreds of billions of dollars to the US economy.

Plaintiffs, including Disney’s Marvel and Lucasfilm units, Universal’s DreamWorks Animation and Warner Bros.’ DC Comics, are seeking statutory damages of up to $150,000 per infringed work or unspecified compensation.

They are also seeking an injunction to stop MiniMax’s alleged violations, rather than relying on damages alone.

The Motion Picture Association has backed the lawsuit, with its chairman Charles Rivkin warning that unchecked copyright infringement could undermine millions of jobs and the cultural value created by the American film industry.

MiniMax, based in Shanghai, has not responded publicly to the claims but has previously described itself as a global AI foundation model company with over 157 million users worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US Army puts cybersecurity at the heart of transformation

Cybersecurity is a critical element of the US Army’s ongoing transformation and of wider national efforts to safeguard critical infrastructure, according to Brandon Pugh, Principal Cyber Adviser to the Secretary of the Army. Speaking at the Billington CyberSecurity Summit on 11 September, Pugh explained that the Army’s Continuous Transformation initiative is intended to deliver advanced technologies to soldiers more rapidly, ensuring readiness for operational environments where cybersecurity underpins every aspect of activity, from base operations to mobilisation.

During the panel discussion, Pugh emphasised that defending the homeland remains a central priority, with the Army directly affected by vulnerabilities in privately owned critical infrastructure such as energy and transport networks. He referred to research conducted by the Army Cyber Institute at the US Military Academy at West Point, which analyses how weaknesses in infrastructure could undermine the Army’s ability to project forces in times of crisis or conflict.

The other panellists agreed that maintaining strong basic cyber hygiene is essential. Josh Salmanson, Vice President for the Defence Cyber Practice at Leidos, underlined the importance of measures such as timely patching, reducing vulnerabilities, and eliminating shared passwords, all of which help to reduce noise in networks and strengthen responses to evolving threats.

The discussion also considered the growing application of AI in cyber operations. Col. Ivan Kalabashkin, Deputy Head of the Cyber Division of Ukraine’s Security Service, reported that Ukraine has faced more than 13,000 cyber incidents directed at government and critical infrastructure systems since the start of the full-scale war, noting that Russia has in recent months employed AI to scan for network vulnerabilities.

Pugh stated that the Army is actively examining how AI can be applied to enhance both defensive and potentially offensive cyber operations, pointing to significant ongoing work within Army Cyber Command and US Cyber Command.

Finally, Pugh highlighted the Army’s determination to accelerate the introduction of cyber capabilities, particularly from innovative companies offering specialist solutions. He stressed the importance of acquisition processes that enable soldiers to test new capabilities within weeks, in line with the Army’s broader drive to modernise how it procures, evaluates, and deploys technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Openbank adds cryptocurrency trading for German customers

Openbank, Grupo Santander’s fully digital bank, now allows customers in Germany to buy, sell, and hold major cryptocurrencies, including Bitcoin, Ether, Litecoin, Polygon, and Cardano.

The service integrates seamlessly with existing investments, removing the need to transfer funds to other platforms. It also offers the protections of the EU’s MiCA regulation and the backing of Santander.

Trades carry a fee of 1.49%, with no custody charges, and the service will soon be available to customers in Spain. Over the coming months, Openbank plans to expand its portfolio and introduce new features, such as direct conversion between different digital assets.

The launch strengthens Openbank’s investment offerings in Germany, complementing its Robo Advisor and thousands of stocks, funds, and ETFs. It also includes an AI-powered broker platform providing target prices for European and US stocks.

Grupo Santander emphasises that the new crypto trading service responds to customer demand while broadening the bank’s range of innovative, technology-driven investment products.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Indonesia’s sovereign wealth fund INA targets data centres and AI in healthcare

The Indonesia Investment Authority (INA), the country’s sovereign wealth fund, is sharpening its focus on digital infrastructure, healthcare and renewable energy as it seeks to attract foreign partners and strengthen national development.

The fund, created in 2021 with $5 billion in state capital, now manages assets worth around $10 billion and is expanding its scope beyond equity into hybrid capital and private credit.

Chief investment officer Christopher Ganis said data centres and supporting infrastructure, such as sub-sea cables, were key priorities as the government emphasises data independence and resilience.

INA has already teamed up with Singapore-based Granite Asia to invest over $1.2 billion in Indonesia’s technology and AI ecosystem, including a new data centre campus in Batam. Ganis added that AI would be applied first in healthcare instead of rushing into broader adoption.

Renewables also remain central to INA’s strategy, with its partnership alongside Abu Dhabi’s Masdar Clean Energy in Pertamina Geothermal Energy cited as a strong performer.

Ganis said Asia’s reliance on bank financing highlights the need for INA’s support in cross-border growth, since domestic banks cannot always facilitate overseas expansion.

Despite growing global ambitions, INA will prioritise projects directly linked to Indonesia. Ganis stressed that it must deliver benefits at home instead of directing capital into ventures without a clear link to the country’s future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

First quantum-AI data centre launched in New York City

Oxford Quantum Circuits (OQC) and Digital Realty have launched the first quantum-AI data centre in New York City at the JFK10 facility, powered by Nvidia GH200 Grace Hopper Superchips. The project combines superconducting quantum computers with AI supercomputing under one roof.

OQC’s GENESIS quantum computer is the first to be deployed in a New York data centre, designed to support hybrid workloads and enterprise adoption. Future GENESIS systems will ship with Nvidia accelerated computing and CUDA-Q integration as standard.

OQC CEO Gerald Mullally said the centre will drive the AI revolution securely and at scale, strengthening the UK–US technology alliance. Digital Realty CEO Andy Power called it a milestone for making quantum-AI accessible to enterprises and governments.

UK Science Minister Patrick Vallance highlighted the £212 billion economic potential of quantum by 2045, citing applications from drug discovery to clean energy. He said the launch puts British innovation at the heart of next-generation computing.

The centre, embedded in Digital Realty’s PlatformDIGITAL, will support applications in finance, security, and AI, including quantum machine learning and accelerated model training. OQC Chair Jack Boyer said it demonstrates UK–US collaboration in leading frontier technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

When language models fabricate truth: AI hallucinations and the limits of trust

AI has come a long way from rule-based systems and chatbots with preset answers. Large language models (LLMs), powered by vast amounts of data and statistical prediction, now generate text that can mirror human speech, mimic tone, and simulate expertise, but also produce convincing hallucinations that blur the line between fact and fiction.

From summarising policy to drafting contracts and responding to customer queries, these tools are becoming embedded across industries, governments, and education systems.

As their capabilities grow, so does the underlying problem that many still underestimate. These systems frequently produce convincing but entirely false information. Often referred to as ‘AI hallucinations’, such factual distortions pose significant risks, especially when users trust outputs without questioning their validity.

Once these systems are deployed in high-stakes environments, from courts to political arenas, the line between generative power and generative failure becomes harder to detect and more dangerous to ignore.

When facts blur into fiction

AI hallucinations are not simply errors. They are confident statements presented as fact, even though they rest on nothing more than probability. Language models are designed to generate the most likely next word, not the correct one. That difference may be subtle in casual settings, but it becomes critical in fields like law, healthcare, or media.
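To make that distinction concrete, here is a minimal toy sketch in Python. The phrase, candidate words, and probabilities are invented for illustration, not taken from any real model; actual models score an entire vocabulary, but the decoding logic is the same: pick what is statistically likely, not what is true.

```python
# Toy illustration: a language model scores candidate next words by
# probability, not by truth. This distribution is invented for the example.
import random

next_word_probs = {           # P(next word | "The capital of Australia is")
    "Sydney":    0.55,        # most famous city: statistically likely, factually wrong
    "Canberra":  0.35,        # the correct answer, but less common in text
    "Melbourne": 0.10,
}

# Greedy decoding picks the single most likely word...
greedy = max(next_word_probs, key=next_word_probs.get)
print(greedy)  # -> "Sydney": fluent, confident, and false

# ...and sampling follows the same probabilities, so the error persists.
words, probs = zip(*next_word_probs.items())
print(random.choices(words, weights=probs, k=5))
```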

One such example emerged when an AI chatbot misrepresented political programmes in the Netherlands, falsely attributing policy statements about Ukraine to the wrong party. The error spread misinformation and triggered official concern. The chatbot had no malicious intent, yet its hallucination shaped public discourse.

Mistakes like these often pass unnoticed because the tone feels authoritative. The model sounds right, and that is the danger.

Image via AI / ChatGPT: when language models hallucinate, they sound credible, and users believe them.

Why large language models hallucinate

Hallucinations are not bugs in the system. They are a direct consequence of the way language models are built. Trained to complete text based on patterns, these systems have no fundamental understanding of the world, no memory of ‘truth’, and no internal model of fact.

A recent study reveals that even the way models are tested may contribute to hallucinations. Instead of rewarding caution or encouraging honesty, current evaluation frameworks favour responses that appear complete and confident, even when inaccurate. The more assertive the lie, the better it scores.

Alongside these structural flaws, real-world use reveals further triggers. The most frequent causes of AI hallucinations are:

  • Vague or ambiguous prompts: a lack of specificity forces the model to fill gaps with speculative content that may not be grounded in real facts.
  • Overly long conversations: as prompt history grows, especially without proper context management, models lose track and invent plausible answers (see the sketch after this list).
  • Missing knowledge: when a model lacks reliable training data on a topic, it may produce content that appears accurate but is fabricated.
  • Leading or biased prompts: inputs that suggest a specific answer can nudge the model into confirming something untrue to match expectations.
  • Interrupted context due to connection issues: especially with browser-based tools, a brief loss of session data can cause the model to generate off-track or contradictory outputs.
  • Over-optimisation for confidence: most systems are trained to sound fluent and assertive, and saying ‘I don’t know’ is statistically rare unless explicitly prompted.
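As an illustration of the ‘overly long conversations’ point above, here is a hedged sketch of simple context management: a helper that trims older turns so the history fits a token budget, dropping old context explicitly rather than letting it be silently mangled. The whitespace-based token count is a crude stand-in for a real tokeniser.

```python
# Minimal sketch of context-window management for long chats.
# Assumption: word count approximates token count; real systems
# would use the model's own tokeniser.

def trim_history(messages: list[str], budget: int = 200) -> list[str]:
    """Keep the most recent messages whose combined size fits the budget."""
    kept: list[str] = []
    used = 0
    for message in reversed(messages):   # walk from newest to oldest
        cost = len(message.split())      # crude proxy for token count
        if used + cost > budget:
            break                        # older turns are dropped explicitly
        kept.append(message)
        used += cost
    return list(reversed(kept))          # restore chronological order

history = [f"turn {i}: " + "word " * 30 for i in range(20)]
print(len(trim_history(history, budget=200)))  # only the latest turns survive
```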

Each of these cases stems from a single truth. Language models are not fact-checkers. They are word predictors. And prediction, without verification, invites fabrication.

The cost of trust in flawed systems

Hallucinations become more dangerous not when they happen, but when they are believed.

Users may not question the output of an AI system if it appears polished, grammatically sound, and well-structured. This perceived credibility can lead to real consequences, including legal documents based on invented cases, medical advice referencing non-existent studies, or voters misled by political misinformation.

In low-stakes scenarios, hallucinations may lead to minor confusion. In high-stakes contexts, the same dynamic can result in public harm or institutional breakdown. Once generated, an AI hallucination can be amplified across platforms, indexed by search engines, and cited in real documents. At that point, it becomes a synthetic fact.

Can hallucinations be fixed?

Some efforts are underway to reduce hallucination rates. Retrieval-augmented generation (RAG), fine-tuning on verified datasets, and human-in-the-loop moderation can improve reliability. Still, no method has eliminated hallucinations.
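As a rough illustration of the retrieval-augmented approach, the sketch below grounds an answer in the best-matching passage and abstains when nothing relevant is retrieved. The `llm` call it defers to is hypothetical, and the bag-of-words cosine similarity is a stand-in for real embeddings; the sample documents are invented.

```python
# Minimal RAG sketch: retrieve, then answer only from retrieved text,
# abstaining when retrieval finds nothing relevant.
from collections import Counter
from math import sqrt

def vectorise(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

documents = [  # invented sample corpus
    "MiCA is the EU regulation covering markets in crypto-assets.",
    "Retrieval-augmented generation grounds answers in retrieved text.",
]

def answer(question: str, threshold: float = 0.2) -> str:
    q = vectorise(question)
    best = max(documents, key=lambda d: cosine(q, vectorise(d)))
    if cosine(q, vectorise(best)) < threshold:
        return "I don't know."           # abstain instead of guessing
    # Hand the model only grounded context; `llm` is a hypothetical call.
    prompt = f"Answer ONLY from this passage:\n{best}\n\nQuestion: {question}"
    return prompt                        # in a real system: return llm(prompt)

print(answer("What does retrieval-augmented generation do?"))
print(answer("Who won the 1998 World Cup?"))  # -> "I don't know."
```

The abstention threshold is the key design choice: it trades coverage for the ability to say ‘I don’t know’, which is exactly the behaviour the closing paragraphs call for.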

The deeper issue is how language models are rewarded, trained, and deployed. Without institutional norms prioritising verifiability and technical mechanisms that can flag uncertainty, hallucinations will remain embedded in the system.

Even the most capable AI models must include humility. The ability to say ‘I don’t know’ is still one of the rarest responses in the current landscape.

Image via AI / ChatGPT: how AI hallucinations mislead users and shape decisions.

Hallucinations won’t go away. Responsibility must step in.

Language models are not truth machines. They are prediction engines trained on vast and often messy human data. Their brilliance lies in fluency, but fluency can easily mask fabrication.

As AI tools become part of our legal, political, and civic infrastructure, institutions and users must approach them critically. Trust in AI should never be passive. And without active human oversight, hallucinations may not just mislead; they may define the outcome.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI sets new rules for teen safety in AI use

OpenAI has outlined a new framework for balancing safety, privacy and freedom in its AI systems, with a strong focus on teenagers.

The company stressed that conversations with AI often involve sensitive personal information, which should be treated with the same level of protection as communications with doctors or lawyers.

At the same time, it aims to grant adult users broad freedom to direct AI responses, provided safety boundaries are respected.

The situation changes for younger users. Teenagers are seen as requiring stricter safeguards, with safety taking priority over privacy and freedom. OpenAI is developing age-prediction tools to identify users under 18, and where uncertainty exists, it will assume the user is a teenager.

In some regions, identity verification may also be required to confirm age, a step the company admits reduces privacy but argues is essential for protecting minors.

Teen users will face tighter restrictions on certain types of content. ChatGPT will be trained not to engage in flirtatious exchanges, and sensitive issues such as self-harm will be carefully managed.

If signs of suicidal thoughts appear, the company says it will first try to alert parents. Where there is imminent risk and parents cannot be reached, OpenAI is prepared to notify the authorities.

The new approach raises questions about privacy trade-offs, the accuracy of age prediction, and the handling of false classifications.

Critics may also question whether restrictions on creative content hinder expression. OpenAI acknowledges these tensions but argues the risks faced by young people online require stronger protections.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI will kill middle-ground media, but raw content will thrive

Advertising is heading for a split future. By 2030, brands will run hyper-personalised AI campaigns or embrace raw human storytelling. Everything in between will vanish.

AI-driven advertising will go far beyond text-to-image gimmicks. These adaptive systems will combine social trends, search habits, and first-party data to create millions of real-time ad variations.

The opposite approach will lean into imperfection, featuring unpolished TikToks, founder-shot iPhone videos, and content that feels authentic and alive. Audiences reward authenticity over carefully scripted, generic campaigns.

Mid-tier creative work, polished but forgettable, will be the first to fade away. AI can replicate it instantly, and audiences will scroll past it without noticing.

Marketers must now pick a side: feed AI with data and scale personalisation, or double down on community-driven, imperfect storytelling. The middle won’t survive.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!