Musk threatens legal action against Apple over AI app rankings

Elon Musk has announced plans to sue Apple, accusing the company of unfairly favouring OpenAI’s ChatGPT over his xAI app Grok on the App Store.

Musk claims that Apple’s ranking practices make it impossible for any AI app except OpenAI’s to reach the top spot, calling this behaviour an ‘unequivocal antitrust violation’. ChatGPT holds the number one position on Apple’s App Store, while Grok ranks fifth.

Musk expressed frustration on social media, questioning why his X app, which he describes as ‘the number one news app in the world,’ has not received higher placement. He suggested that Apple’s ranking decisions might be politically motivated.

The dispute highlights growing tensions as AI companies compete for prominence on major platforms.

Apple and Musk’s xAI have not yet responded to requests for comment.

The controversy unfolds amid increasing scrutiny of App Store policies and their impact on competition, especially within the fast-evolving AI sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk and OpenAI CEO Altman clash over Apple and X

After Elon Musk accused Apple of favouring OpenAI’s ChatGPT over other AI applications on the App Store, OpenAI CEO Sam Altman responded forcefully.

Altman alleged that Musk manipulates the social media platform X for his benefit, targeting competitors and critics. The exchange adds to their history of public disagreements since Musk left OpenAI’s board in 2018.

Musk’s claim centres on Apple’s refusal to list X or Grok (xAI’s AI app) in the App Store’s ‘Must have’ section, despite X being the top news app worldwide and Grok ranking fifth.

Although Musk has not provided evidence of an antitrust violation, a recent US court ruling found Apple in contempt for restricting App Store competition. The EU also fined Apple €500 million earlier this year over commercial restrictions on app developers.

OpenAI’s ChatGPT currently leads the App Store’s ‘Top Free Apps’ list for iPhones in the US, while Grok holds the fifth spot. Musk’s accusations highlight ongoing tensions in the AI industry as big tech companies battle for app visibility and market dominance.

The situation emphasises how regulatory scrutiny and legal challenges shape competition within the digital economy.


Huawei’s dominance in AI sparks national security debate in Indonesia

Indonesia is working urgently to secure strategic autonomy in AI as Huawei rapidly expands its presence in the country’s critical infrastructure. Officials are under pressure to adopt enforceable safeguards that balance innovation with security before serious vulnerabilities emerge.

Huawei’s telecom dominance extends into AI through 5G infrastructure, network tools, and AI cloud centres. Partnerships with local telecoms, along with government engagement, position the company at the heart of Indonesia’s digital landscape.

Experts warn that concentrating AI under one foreign supplier could compromise data sovereignty and heighten security risks. Current governance relies on two non-binding guidelines, providing no enforceable oversight or urgent baseline for protecting critical infrastructure.

Malaysia’s withdrawal from Huawei’s AI projects underscores the geopolitical stakes. Indonesia’s fragmented approach, with ministries acting separately, risks producing conflicting policies and leaving immediate gaps in security oversight.

Analysts suggest a robust framework should require supply chain transparency, disclosure of system origins, and adherence to data protection laws. Indonesia must act swiftly to establish these rules and coordinate policy across ministries to safeguard its infrastructure.


Small language models gain ground in AI translation

Small language models are emerging as a serious challenger to large, general-purpose AI in translation, offering faster turnaround, lower costs, and greater accuracy for specific industries and language pairs.

Straker, an ASX-listed language technology firm, claims its Tiri model family can outperform larger systems by focusing on domain-specific understanding and terminology rather than broad coverage.

Tiri delivers higher contextual accuracy by training on carefully curated translation memories and sector-specific data, cutting the need for expensive human post-editing. The models also consume less computing power, benefiting the finance, healthcare, and legal sectors.

Straker integrates human feedback directly into its workflows to ensure ongoing improvements and maintain client trust.

The company is expanding its technology into enterprise automation by integrating with the AI workflow platform n8n.

The integration brings Straker’s Verify tool to a network of over 230,000 users, enabling automated translation checks, real-time quality scores, and seamless escalation to human linguists. Further integrations with platforms such as Microsoft Teams are planned.

Straker recently reported record profitability and secured a price target upgrade from broker Ord Minnett. The firm believes the future of AI translation lies not in scale but in specialised models that deliver translations that are both fluent and accurate in context.


AI plays major role in crypto journalism but cannot replace humans

A recent report by Chainstory analysed 80,000 crypto news articles across five leading sites, revealing that 48% of them disclosed some form of AI use during 2025. Investing.com and The Defiant led in AI-generated or assisted content among the sites studied.

The extent of AI use across the broader industry may vary, as disclosure practices differ.

Editors interviewed for the report highlighted AI’s strengths and limitations. While AI proves valuable for research tasks such as summarising reports and extracting data, its storytelling ability remains weak.

Articles entirely written by AI often lack a genuine human tone, which can feel unnatural to audiences. One editor noted that readers can usually tell when content isn’t authored by a person, regardless of disclosure.

Afik Rechler, co-CEO of Chainstory, stated that AI is now an integral part of crypto journalism but has not replaced human reporters. He emphasised balancing AI help with human insight to keep readers’ trust, since current AI can’t manage complex, nuanced stories.


Microsoft enters AI-powered 3D modelling race with Copilot 3D

Microsoft has launched Copilot 3D, an AI-powered tool that transforms 2D images into realistic 3D models without requiring specialist skills. Available through Copilot Labs, it aims to make 3D creation faster, more accessible, and more intuitive for global users signed in with a Microsoft account.

The tool supports only image-to-3D conversion, with no text-to-3D capability. Users can upload images up to 10 MB, generate a model, and download it in GLB format. Microsoft states uploaded images are used solely for model generation and are not retained for training or personalisation.

Copilot 3D is designed for applications that range from prototyping and creative exploration to interactive learning, thereby reducing the steep learning curve associated with conventional 3D programs. It can be used on PCs or mobile browsers; however, Microsoft recommends a desktop experience for optimal results.

Tech rivals are also advancing similar tools. Apple’s Matrix3D model can build 3D scenes from images, while Meta’s 3D Gen AI system creates 3D assets from text or applies textures to existing models. Nvidia’s Instant NeRF technology generates realistic 3D scenes from multiple 2D images.

The release underscores growing competition in AI-driven 3D design, as companies race to make advanced modelling tools more accessible to everyday creators.


Why AI coding tools may follow the path of past tech revolutions

In mid-2025, the debate over AI in programming mirrors historic resistance to earlier breakthroughs in computing. Critics say current AI coding tools often slow developers and create overconfidence, while supporters argue they will eventually transform software creation.

The Register compares this moment to the 1950s, when Grace Hopper faced opposition to high-level programming languages. Similar scepticism greeted technologies such as C, Java, and intermediate representation, which later became integral to modern computing.

Current AI tools face limits in resources, business models, and capability. Yet, as past trends show, these constraints may fade as hardware, training, and developer practices improve. Advocates believe AI will shift human effort toward design and problem definition rather than manual coding.

For now, adoption remains a mixed blessing, with performance issues and unrealistic expectations. But history suggests that removing barriers between ideas and results catalyses lasting change.


Google works to curb Gemini’s endless self-criticism

Google is already deploying a fix for a troubling glitch in its Gemini chatbot. Users reported that Gemini, when encountering complex coding problems, began spiralling into dramatic self-criticism, repeatedly and unprompted declaring statements such as ‘I am a failure’ and ‘I am a disgrace to all possible and impossible universes’.

Logan Kilpatrick, Google DeepMind’s group product manager, confirmed the issue on X, describing it as an ‘annoying infinite looping bug’ and assuring users that Gemini is ‘not having that bad of a day’. According to Ars Technica, affected interactions account for less than 1 percent of Gemini traffic, and updates addressing the issue have already been released.

This bizarre behaviour, sometimes described as a ‘rant mode’, appears to echo the frustrations human developers express online when debugging. Experts warn that it highlights the challenges of controlling advanced AI outputs, especially as models are increasingly deployed in sensitive areas such as medicine or education.


DeepSeek’s efficiency forces OpenAI to rethink closed AI model strategy

OpenAI has released reasoning-focused open-weight models in a strategic response to China’s surging AI ecosystem, led by DeepSeek’s disruptive efficiency. The shift reflects not merely competitive posturing but a deeper recognition that innovation philosophies are changing.

DeepSeek’s rise stems from maximising limited resources under US export restrictions, proving that top-tier AI does not require massive chip clusters. This agility has emboldened China’s open-source AI sector, where more than ten labs now rival their US counterparts, fundamentally reshaping competitive dynamics.

OpenAI’s ‘gpt-oss’ models, whose weights are openly available for customisation, mark a departure from its traditional closed approach. Industry watchers see the move as a hybrid play: retaining proprietary strengths while embracing openness to appeal to global developers.

The implications stretch beyond technology into geopolitics. US export controls may have inadvertently fuelled Chinese AI innovation, with DeepSeek’s self-reliant architecture now serving as a proof point for resilience and a challenge to the US’s historically resource-intensive approach to AI.

The rivalry may spur collaboration or escalate competition: DeepSeek continues to advance models such as DeepSeek-MoE, while OpenAI balances openness with monetisation. Either way, global AI dynamics are shifting, raising both technological and philosophical stakes.


Altman warns of harmful AI use after model backlash

OpenAI chief executive Sam Altman has warned that many ChatGPT users are engaging with AI in self-destructive ways. His comments follow backlash over the sudden discontinuation of GPT-4o and other older models, which he admitted was a mistake.

Altman said that users form powerful attachments to specific AI models, and while most can distinguish between reality and fiction, a small minority cannot. He stressed OpenAI’s responsibility to manage the risks for those in mentally fragile states.

Using ChatGPT as a therapist or life coach was not his concern, as many people already benefit from it. Instead, he worried about cases where advice subtly undermines a user’s long-term well-being.

The model removals triggered a huge social-media outcry, with complaints that newer versions offered shorter, less emotionally rich responses. OpenAI has since restored GPT-4o for Plus subscribers, while free users will only have access to GPT-5.
