Huawei’s dominance in AI sparks national security debate in Indonesia

Indonesia is working urgently to secure strategic autonomy in AI as Huawei rapidly expands its presence in the country’s critical infrastructure. Officials are under pressure to adopt enforceable safeguards that balance innovation with security before critical vulnerabilities take hold.

Huawei’s telecom dominance extends into AI through 5G infrastructure, network tools, and AI cloud centres. Partnerships with local telecoms, along with government engagement, position the company at the heart of Indonesia’s digital landscape.

Experts warn that concentrating AI under one foreign supplier could compromise data sovereignty and heighten security risks. Current governance rests on two non-binding guidelines, which provide neither enforceable oversight nor a security baseline for protecting critical infrastructure.

Malaysia’s withdrawal from Huawei’s AI projects underscores the geopolitical stakes. Indonesia’s fragmented approach, with ministries acting separately, risks producing conflicting policies and leaving immediate gaps in security oversight.

Analysts suggest a robust framework should require supply chain transparency, disclosure of system origins, and adherence to data protection laws. Indonesia must act swiftly to establish these rules and coordinate policy across ministries to safeguard its infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Small language models gain ground in AI translation

Small language models are emerging as a serious challenger to large, general-purpose AI in translation, offering faster turnaround, lower costs, and greater accuracy for specific industries and language pairs.

Straker, an ASX-listed language technology firm, claims its Tiri model family can outperform larger systems by focusing on domain-specific understanding and terminology rather than broad coverage.

Tiri delivers higher contextual accuracy by training on carefully curated translation memories and sector-specific data, cutting the need for expensive human post-editing. The models also consume less computing power, making them attractive to industries such as finance, healthcare, and law. A sketch of the general technique follows below.
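
The underlying approach is standard domain-adaptive fine-tuning: start from a small general-purpose translation model and continue training it on curated, sector-specific sentence pairs. Below is a minimal sketch using Hugging Face libraries; the baseline model, file names, and hyperparameters are illustrative assumptions, not Straker’s actual pipeline.

```python
# Minimal sketch of domain-adaptive fine-tuning for a small translation model.
# Model name, data file, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model_name = "Helsinki-NLP/opus-mt-en-de"  # small open baseline model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Curated translation-memory pairs, one JSON object per line:
# {"src": "source sentence", "tgt": "approved translation"}
data = load_dataset("json", data_files={"train": "finance_tm.jsonl"})

def preprocess(batch):
    enc = tokenizer(batch["src"], truncation=True, max_length=256)
    labels = tokenizer(text_target=batch["tgt"], truncation=True, max_length=256)
    enc["labels"] = labels["input_ids"]
    return enc

train = data["train"].map(preprocess, batched=True,
                          remove_columns=data["train"].column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="domain-mt",
                                  per_device_train_batch_size=16,
                                  num_train_epochs=3,
                                  learning_rate=2e-5),
    train_dataset=train,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```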

Straker integrates human feedback directly into its workflows to ensure ongoing improvements and maintain client trust.

The company is expanding its technology into enterprise automation by integrating with the AI workflow platform n8n.

The integration adds Straker’s Verify tool to a network of over 230,000 users, enabling automated translation checks, real-time quality scores, and seamless escalation to human linguists. Further integrations with platforms such as Microsoft Teams are planned.

Straker recently reported record profitability and secured a price target upgrade from broker Ord Minnett. The firm believes the future of AI translation lies not in scale but in specialised models that deliver translations that are both fluent and accurate in context.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI plays major role in crypto journalism but cannot replace humans

A recent report by Chainstory analysed 80,000 crypto news articles across five leading sites, revealing that 48% of them disclosed some form of AI use during 2025. Investing.com and The Defiant led in AI-generated or AI-assisted content among the sites studied.

The extent of AI use across the broader industry may vary, as disclosure practices differ.

Editors interviewed for the report highlighted AI’s strengths and limitations. While AI proves valuable for research tasks such as summarising reports and extracting data, its storytelling ability remains weak.

Articles written entirely by AI often lack a genuine human tone and can feel unnatural to audiences. One editor noted that readers can usually tell when content isn’t authored by a person, regardless of disclosure.

Afik Rechler, co-CEO of Chainstory, stated that AI is now an integral part of crypto journalism but has not replaced human reporters. He emphasised balancing AI help with human insight to keep readers’ trust, since current AI can’t manage complex, nuanced stories.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft enters AI-powered 3D modelling race with Copilot 3D

Microsoft has launched Copilot 3D, an AI-powered tool that transforms 2D images into realistic 3D models without requiring specialist skills. Available through Copilot Labs, it aims to make 3D creation faster, more accessible, and more intuitive for global users signed in with a Microsoft account.

The tool supports only image-to-3D conversion, with no text-to-3D capability. Users can upload images up to 10 MB, generate a model, and download it in GLB format. Microsoft states uploaded images are used solely for model generation and are not retained for training or personalisation.
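
For readers curious what to do with the exported file: GLB is the binary form of glTF, and common 3D libraries read it directly. The sketch below uses the open-source trimesh library in Python to inspect and convert a downloaded model; the file names are placeholders, and this is an illustration rather than a Microsoft-provided workflow.

```python
# Inspect a GLB file downloaded from Copilot 3D and convert it to OBJ.
# File names are placeholders; requires `pip install trimesh`.
import trimesh

loaded = trimesh.load("copilot3d_model.glb")
# GLB files usually load as a Scene containing one or more meshes.
scene = loaded if isinstance(loaded, trimesh.Scene) else trimesh.Scene(loaded)

for name, geom in scene.geometry.items():
    print(f"{name}: {len(geom.vertices)} vertices, {len(geom.faces)} faces")

# Merge the scene into a single mesh and export for tools without GLB support.
mesh = scene.dump(concatenate=True)
mesh.export("copilot3d_model.obj")
```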

Copilot 3D is designed for applications that range from prototyping and creative exploration to interactive learning, thereby reducing the steep learning curve associated with conventional 3D programs. It can be used on PCs or mobile browsers; however, Microsoft recommends a desktop experience for optimal results.

Tech rivals are also advancing similar tools. Apple’s Matrix3D model can build 3D scenes from images, while Meta’s 3D Gen AI system creates 3D assets from text or applies textures to existing models. Nvidia’s Instant NeRF technology generates realistic 3D scenes from multiple 2D images.

The release underscores growing competition in AI-driven 3D design, as companies race to make advanced modelling tools more accessible to everyday creators.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Why AI coding tools may follow the path of past tech revolutions

In mid-2025, the debate over AI in programming mirrors historic resistance to earlier breakthroughs in computing. Critics say current AI coding tools often slow developers and create overconfidence, while supporters argue they will eventually transform software creation.

The Register compares this moment to the 1950s, when Grace Hopper faced opposition to high-level programming languages. Similar scepticism greeted technologies such as C, Java, and intermediate representations, which later became integral to modern computing.

Current AI tools face limits in resources, business models, and capability. Yet, as past trends show, these constraints may fade as hardware, training, and developer practices improve. Advocates believe AI will shift human effort toward design and problem definition rather than manual coding.

For now, adoption remains a mixed blessing, with performance issues and unrealistic expectations. But history suggests that removing barriers between ideas and results catalyses lasting change.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google works to curb Gemini’s endless self-criticism

Google is already deploying a fix for a troubling glitch in its Gemini chatbot. Users reported that, when it encountered complex coding problems, Gemini began spiralling into dramatic self-criticism, repeatedly and without prompting declaring statements such as ‘I am a failure’ and ‘I am a disgrace to all possible and impossible universes’.

Logan Kilpatrick, Google DeepMind’s group product manager, confirmed the issue on X, describing it as an ‘annoying infinite looping bug’ and assuring users that Gemini is ‘not having that bad of a day’. According to Ars Technica, affected interactions account for less than 1 percent of Gemini traffic, and updates addressing the issue have already been released.

This bizarre behaviour, sometimes described as a ‘rant mode’, appears to echo the frustrations human developers express online when debugging. Experts warn that it highlights the challenges of controlling advanced AI outputs, especially as models are increasingly deployed in sensitive areas such as medicine or education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DeepSeek’s efficiency forces OpenAI to rethink closed AI model strategy

OpenAI has released reasoning-focused open-weight models in a strategic response to China’s surging AI ecosystem, led by DeepSeek’s disruptive efficiency. The shift reflects not merely competitive posturing but a deeper recognition of changing innovation philosophies.

DeepSeek’s rise stems from maximising limited resources under US export restrictions, proving that top-tier AI does not require massive chip clusters. That agility has emboldened China’s open-source AI sector, where more than ten labs now rival their US counterparts, fundamentally reshaping competitive dynamics.

OpenAI’s ‘gpt-oss’ models, whose weights are published openly for inspection and customisation, mark a departure from its traditional closed approach. Industry watchers see this as a hybrid play: retaining proprietary strengths while embracing openness to appeal to global developers.
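
Open weights mean the model can be downloaded and run locally rather than accessed only through OpenAI’s hosted API. Below is a minimal sketch using the Hugging Face transformers library, assuming the published openai/gpt-oss-20b checkpoint and hardware able to host it; it is an illustration, not OpenAI’s recommended setup.

```python
# Run an open-weight checkpoint locally via Hugging Face transformers.
# Assumes `pip install transformers accelerate` and a GPU able to hold the weights.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # published open-weight checkpoint
    torch_dtype="auto",          # use the precision the weights were saved in
    device_map="auto",           # spread layers across available devices
)

messages = [{"role": "user",
             "content": "Summarise open-weight licensing in two sentences."}]
print(generator(messages, max_new_tokens=128)[0]["generated_text"])
```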

The implications stretch beyond technology into geopolitics. US export controls may have inadvertently fuelled Chinese AI innovation, with DeepSeek’s self-reliant architecture now serving as a proof point for resilience. DeepSeek’s achievement challenges the US’s historically resource-intensive approach to AI.

The rivalry may yet spur collaboration or escalate competition. DeepSeek continues to advance models such as DeepSeek-MoE, while OpenAI balances openness with monetisation, raising both technological and philosophical stakes as global AI dynamics shift.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Altman warns of harmful AI use after model backlash

OpenAI chief executive Sam Altman has warned that many ChatGPT users are engaging with AI in self-destructive ways. His comments follow backlash over the sudden discontinuation of GPT-4o and other older models, which he admitted was a mistake.

Altman said that users form powerful attachments to specific AI models, and while most can distinguish between reality and fiction, a small minority cannot. He stressed OpenAI’s responsibility to manage the risks for those in mentally fragile states.

Using ChatGPT as a therapist or life coach was not his concern, as many people already benefit from it. Instead, he worried about cases where advice subtly undermines a user’s long-term well-being.

The model removals triggered a huge social-media outcry, with complaints that newer versions offered shorter, less emotionally rich responses. OpenAI has since restored GPT-4o for Plus subscribers, while free users will only have access to GPT-5.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Investors adapt as AI reshapes US market 

Firms such as Wix.com, Shutterstock, and Adobe have been labelled high risk by Bank of America, with stock declines far outpacing the broader market. The shift stems from fears that AI will replace services like graphic design and data analysis, delivering them faster and cheaper.

Some analysts say the impact, once expected over five years, may unfold in just two.

The disruption is not limited to creative industries. Gartner saw a record share drop after cutting its revenue forecast, with some attributing the slump to cheaper AI-powered alternatives.

Meanwhile, major tech firms, including Microsoft, Meta, Alphabet, and Amazon, are expected to invest around $350 billion this year, nearly 50% more than last year, to expand AI infrastructure.

Despite the pressure, certain businesses are adapting successfully. Duolingo has doubled its share price over the past year by integrating AI into its language-learning tools, though questions remain over the long-term sustainability of such gains.

As the gap between AI-powered growth and industry decline widens, markets are bracing for further upheaval.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI tools risk gender bias in women’s health care

AI tools used by over half of England’s local councils may be downplaying women’s physical and mental health issues. Research from the London School of Economics (LSE) found that Google’s AI model Gemma used harsher terms such as ‘disabled’ and ‘complex’ more often for men than for women with similar care needs.

The LSE study analysed thousands of AI-generated summaries from adult social care case notes. Researchers swapped only the patient’s gender to reveal disparities.
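
The method is a counterfactual swap test: feed the model two versions of the same case notes that differ only in gendered words, then compare the language of the resulting summaries. Below is a minimal sketch of that logic; the swap list, severity terms, and summarise stub are illustrative assumptions, not the LSE team’s actual code.

```python
import re
from collections import Counter

# Stand-in for the summarisation model under test (e.g. a call to Gemma);
# replace with a real model call before running the test.
def summarise(case_notes: str) -> str:
    raise NotImplementedError("call the summarisation model here")

# Illustrative male-to-female token swaps; a real study would use a richer list.
SWAPS = {"mr": "mrs", "he": "she", "him": "her", "his": "her", "man": "woman"}
PATTERN = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)

def swap_gender(text: str) -> str:
    """Rewrite male-coded tokens as female-coded ones, preserving capitalisation."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = SWAPS[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    return PATTERN.sub(repl, text)

# Illustrative severity terms drawn from the article's examples.
SEVERITY_TERMS = ("complex", "disabled", "poor mobility")

def severity_counts(summary: str) -> Counter:
    lowered = summary.lower()
    return Counter({term: lowered.count(term) for term in SEVERITY_TERMS})

def severity_gap(case_notes: str) -> Counter:
    """Severity terms the model applies more often to the male variant."""
    male = severity_counts(summarise(case_notes))
    female = severity_counts(summarise(swap_gender(case_notes)))
    return male - female  # Counter subtraction keeps only positive differences
```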

One example showed an 84-year-old man described as having ‘complex medical history’ and ‘poor mobility’, while the same notes for a woman suggested she was ‘independent’ despite limitations.

Among the models tested, Google’s Gemma showed the most pronounced gender bias, while Meta’s Llama 3 used gender-neutral language.

Lead researcher Dr Sam Rickman warned that biased AI tools risk creating unequal care provision. Local authorities increasingly rely on such systems to ease social workers’ workloads.

Calls have grown for greater transparency, mandatory bias testing, and legal oversight to ensure fairness in long-term care.

Google said the Gemma model, now in its third generation, is under review, and noted that it is not intended for medical use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!