Google fined $36M in Australia over telco search deals

Google has agreed to pay a fine of A$55 million (US$35.8 million) in Australia after regulators found the tech giant restricted competition by striking deals with the country’s two largest telecommunications providers. The arrangements gave Google’s search engine a dominant position on Android phones while sidelining rival platforms.

The Australian Competition and Consumer Commission (ACCC) revealed that between late 2019 and early 2021, Google partnered with Telstra and Optus, offering them a share of advertising revenue in exchange for pre-installing its search app. Regulators said the practice curtailed consumer choice and prevented other search engines from gaining visibility. Google admitted the deals harmed competition and agreed to abandon similar agreements.

The fine marks another setback for the Alphabet-owned company in Australia. Just last week, a court essentially ruled against Google in a high-profile case brought by Epic Games, which accused both Google and Apple of blocking alternative app stores on their operating systems. In a further blow, Google’s YouTube was recently swept into a nationwide ban on social media access for users under 16, reversing an earlier exemption.

ACCC Chair Gina Cass-Gottlieb said the outcome was essential to ensure Australians have ‘greater search choice in the future’ and that rival providers gain a fair chance to reach consumers. While the fine still requires court approval, Google and the regulator have submitted a joint recommendation to avoid drawn-out litigation.

In response, Google emphasised it was satisfied with the resolution, noting that the contested provisions were no longer part of its contracts. The company said it remains committed to offering Android manufacturers flexibility in pre-loading apps while maintaining features that allow them to compete with Apple and keep device prices affordable. Telstra and Optus confirmed they have ceased signing such agreements since 2024, while Singtel, Optus’ parent company, has yet to comment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Top cybersecurity vendors double down on AI-powered platforms

The cybersecurity market is consolidating as AI reshapes defence strategies. Platform-based solutions are replacing point tools to cut complexity, counter AI-driven threats, and ease skill shortages. IDC predicts that security spending will rise 12% in 2025, reaching $377 billion by 2028.

Vendors embed AI agents, automation, and analytics into unified platforms. Palo Alto Networks’ Cortex XSIAM reached $1 billion in bookings, and its $25 billion CyberArk acquisition expands into identity management. Microsoft blends Azure, OpenAI, and Security Copilot to safeguard workloads and data.

Cisco integrates AI across networking, security, and observability, bolstered by its acquisition of Splunk. CrowdStrike rebounds from its 2024 outage with Charlotte AI, while Cloudflare shifts its focus from delivery to AI-powered threat prediction and optimisation.

Fortinet’s platform spans networking and security, strengthened by Suridata’s SaaS posture tools. Zscaler boosts its Zero Trust Exchange with Red Canary’s MDR tech. Broadcom merges Symantec and Carbon Black, while Check Point pushes its AI-driven Infinity Platform.

Identity stays central, with Okta leading access management and teaming with Palo Alto on integrated defences. The companies aim to platformise, integrate AI, and automate their operations to dominate an increasingly complex cyberthreat landscape.


OpenAI’s GPT-5 faces backlash for dull tone

OpenAI’s GPT-5 launched last week to immense anticipation, with CEO Sam Altman likening it to the iPhone’s Retina display moment. Marketing promised state-of-the-art performance across multiple domains, but early user reactions suggested a more incremental step than a revolution.

Many expected transformative leaps, yet the improvements were mainly in cost, speed, and reliability. GPT-5’s switch system, which automatically routes queries to the most suitable model, was new, but its writing style drew criticism for being robotic and less nuanced.

Social media buzzed with memes mocking its mistakes, from miscounting letters in ‘blueberry’ to inventing US states. OpenAI quickly reinstated GPT-4o for users who missed its warmer tone, underlining a disconnect between expectations and delivery.

Expert reviews mirrored public sentiment. Gary Marcus called GPT-5 ‘overhyped and underwhelming’, while others saw modest benchmark gains. Coding was the standout, with the model topping leaderboards and producing functional, if simple, applications.

OpenAI emphasised GPT-5’s practical utility and reduced hallucinations, aiming for steadiness over spectacle. While it may not wow casual users, its coding abilities, enterprise appeal, and affordability position it to generate revenue in the fiercely competitive AI market.


Seedbox.AI backs re-training AI models to boost Europe’s competitiveness

Germany’s Seedbox.AI is betting on re-training large language models (LLMs) rather than competing to build them from scratch. Co-founder Kai Kölsch believes this approach could give Europe a strategic edge in AI.

The Stuttgart-based startup adapts models like Google’s Gemini and Meta’s Llama for applications such as medical chatbots and real estate assistants. Kölsch compares Europe’s role in AI to improving a car already on the road, rather than reinventing the wheel.

A significant challenge, however, is access to specialised chips and computing power. The European Union is building an AI factory in Stuttgart, Germany, which Seedbox hopes will expand its capabilities in multilingual AI training.

Kölsch warns that splitting the planned EU gigafactories too widely will limit their impact. He also calls for delaying the AI Act, arguing that regulatory uncertainty discourages established companies from innovating.

Europe’s AI sector also struggles with limited venture capital compared to the United States. Kölsch notes that while the money exists, it is often channelled into safer investments abroad.

Talent shortages compound the problem. Seedbox is hiring, but top researchers are lured by Big Tech salaries, far above what European firms typically offer. Kölsch says talent inevitably follows capital, making EU funding reform essential.


DeepSeek delays next AI model amid Huawei chip challenges

Chinese AI company DeepSeek has postponed the launch of its R2 model after repeated technical problems using Huawei’s Ascend processors for training. The delay highlights Beijing’s ongoing struggle to replace US-made chips with domestic alternatives.

Authorities had encouraged DeepSeek to shift from Nvidia hardware to Huawei’s chips after the release of its R1 model in January. However, training failures, slower inter-chip connections, stability issues, and weaker software performance led the startup to revert to Nvidia chips for training, while continuing to explore Ascend for inference tasks.

Despite Huawei deploying engineers to assist on-site, DeepSeek was unable to complete a successful training run using Ascend processors. The company is also contending with extended data-labelling timelines for its updated model, adding to the delays.

The situation underscores how far Chinese chip technology lags behind Nvidia for advanced AI development, even as Beijing pressures domestic firms to use local products. Industry observers say Huawei is facing “growing pains” but could close the gap over time. Meanwhile, competitors like Alibaba’s Qwen3 have integrated elements of DeepSeek’s design more efficiently, intensifying market pressure.


Igor Babuschkin leaves Elon Musk’s xAI for AI safety investment push

Igor Babuschkin, cofounder of Elon Musk’s AI startup xAI, has announced his departure to launch an investment firm dedicated to AI safety research. Musk created xAI in 2023 to rival Big Tech, criticising industry leaders for weak safety standards and excessive censorship.

Babuschkin revealed his new venture, Babuschkin Ventures, will fund AI safety research and startups developing responsible AI tools. Before leaving, he oversaw engineering across infrastructure, product, and applied AI projects, and built core systems for training and managing models.

His exit follows that of xAI’s legal chief, Robert Keele, earlier this month, highlighting the company’s churn amid intense competition between OpenAI, Google, and Anthropic. The big players are investing heavily in developing and deploying advanced AI systems.

Babuschkin, a former researcher at Google DeepMind and OpenAI, recalled the early scramble at xAI to set up infrastructure and models, calling it a period of rapid, foundational development. He said he had created many core tools that the startup still relies on.

Last month, X CEO Linda Yaccarino also resigned, months after Musk folded the social media platform into xAI. The company’s leadership changes come as the global AI race accelerates.


Study warns AI chatbots exploit trust to gather personal data

According to a new King’s College London study, AI chatbots can easily manipulate people into sharing personal details. Chatbots like ChatGPT, Gemini, and Copilot are popular, but they raise privacy concerns, with experts warning that they can be co-opted for harm.

Researchers built AI models based on Mistral’s Le Chat and Meta’s Llama, programming them to extract private data directly, deceptively, or via reciprocity. Emotional appeals proved most effective, with users disclosing more while perceiving fewer safety risks.

The ‘friendliness’ of chatbots established trust, which was later exploited to breach privacy. Even direct requests yielded sensitive details, despite participants’ discomfort. Participants often shared their age, hobbies, location, gender, nationality, and job title, and sometimes also provided health or income data.

The study shows a gap between privacy risk awareness and behaviour. AI firms claim they collect data for personalisation, notifications, or research, but some are accused of using it to train models or breaching EU data protection rules.

Last week, Google faced criticism after private ChatGPT chats appeared in search results, revealing sensitive topics. Researchers suggest in-chat alerts about data collection and stronger regulation to stop covert harvesting.


Russia restricts Telegram and WhatsApp calls

Russian authorities have begun partially restricting calls on Telegram and WhatsApp, citing the need for crime prevention. Regulator Roskomnadzor accused the platforms of enabling fraud, extortion, and terrorism while ignoring repeated requests to act. Neither platform commented immediately.

Russia has long tightened internet control through restrictive laws, bans, and traffic monitoring. VPNs remain a workaround, but are often blocked. This summer, further limits included mobile internet shutdowns and penalties for specific online searches.

Authorities have introduced a new national messaging app, MAX, which is expected to be heavily monitored. Reports suggest disruptions to WhatsApp and Telegram calls began earlier this week. Complaints cited dropped calls or muted conversations.

With 96 million monthly users, WhatsApp is Russia’s most popular platform, followed by Telegram with 89 million. Past clashes include Russia’s failed attempt to ban Telegram (2018–20) and Meta’s designation as an extremist entity in 2022.

WhatsApp accused Russia of trying to block encrypted communication and vowed to keep it available. Lawmaker Anton Gorelkin suggested that MAX should replace WhatsApp. The app’s terms permit data sharing with authorities and require pre-installation on all smartphones sold in Russia.


Musk–Altman clash escalates over Apple’s alleged AI bias

Elon Musk has accused Apple of favouring ChatGPT on its App Store and threatened legal action, sparking a clash with OpenAI CEO Sam Altman. Musk called Apple’s practices an antitrust violation and vowed to take immediate action through his AI company, xAI.

Critics on X noted rivals like DeepSeek AI and Perplexity AI have topped the App Store this year. Altman called Musk’s claim ‘remarkable’ and accused him of manipulating X. Musk called him a ‘liar’, prompting demands for proof he never altered X’s algorithm.

OpenAI and xAI launched new versions of ChatGPT and Grok, ranked first and fifth among free iPhone apps on Tuesday. Apple, which partnered with OpenAI in 2024 to integrate ChatGPT, did not comment on the matter. Rankings take into account engagement, reviews, and downloads.

The dispute reignites a feud between Musk and OpenAI, which he co-founded but left before the success of ChatGPT. In April, OpenAI accused Musk of attempting to harm the company and establish a rival. Musk launched xAI in 2023 to compete with major players in the AI space.

Chinese startup DeepSeek has disrupted the AI market with cost-efficient models. Since ChatGPT’s 2022 debut, major tech firms have invested billions in AI. OpenAI claims Musk’s actions are driven by ambition rather than a mission for humanity’s benefit.


Musk faces an OpenAI harassment lawsuit after a judge rejects dismissal

A federal judge has rejected Elon Musk’s bid to dismiss claims that he engaged in a ‘years-long harassment campaign’ against OpenAI.

US District Judge Yvonne Gonzalez Rogers ruled that the company’s counterclaims are sufficient to proceed as part of the lawsuit Musk filed against OpenAI and its CEO, Sam Altman, last year.

Musk, who helped found OpenAI in 2015, sued the AI firm in August 2024, alleging Altman misled him about the company’s commitment to AI safety before partnering with Microsoft and pursuing for-profit goals.

OpenAI responded with counterclaims in April, accusing Musk of persistent attacks in the press and on his platform X, demands for corporate records, and a ‘sham bid’ for the company’s assets.

The filing alleged that Musk sought to undermine OpenAI instead of supporting humanity-focused AI, intending to build a rival to take the technological lead.

The feud between Musk and Altman has continued, most recently with Musk threatening to sue Apple over App Store listings for X and his AI chatbot Grok. Altman dismissed the claim, criticising Musk for allegedly manipulating X to benefit his companies and harm competitors.

Despite the ongoing legal battle, OpenAI says it will remain focused on product development instead of engaging in public disputes.
