DeepSeek delays next AI model amid Huawei chip challenges

Chinese AI company DeepSeek has postponed the launch of its R2 model after repeated technical problems using Huawei’s Ascend processors for training. The delay highlights Beijing’s ongoing struggle to replace US-made chips with domestic alternatives.

Authorities had encouraged DeepSeek to shift from Nvidia hardware to Huawei’s chips after the release of its R1 model in January. However, training failures, slower inter-chip connections, stability issues, and weaker software performance led the start-up to revert to Nvidia chips for training, while continuing to explore Ascend for inference tasks.

Despite Huawei deploying engineers to assist on-site, DeepSeek was unable to complete a successful training run using Ascend processors. The company is also contending with extended data-labelling timelines for its updated model, adding to the delays.

The situation underscores how far Chinese chip technology lags behind Nvidia for advanced AI development, even as Beijing pressures domestic firms to use local products. Industry observers say Huawei is facing “growing pains” but could close the gap over time. Meanwhile, competitors such as Alibaba, whose Qwen3 models have integrated elements of DeepSeek’s design more efficiently, are intensifying market pressure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple plans Siri upgrade with AI robots and smart displays

A tabletop robot, expected by 2027, could follow users around with a moving display and an animated voice assistant. Apple previewed this concept in research earlier this year, showing a dancing robot mimicking user movement.

Siri may soon take on a more visual, animated form, allowing natural conversations similar to ChatGPT’s voice mode. Apple is testing designs based on Memoji and the Finder icon.

A new smart home display will likely launch in 2026, featuring facial recognition and shared user access. Its design reportedly resembles Google’s Nest Hub.

Apple is also developing a range of home security products, including cameras and other devices, as part of a new smart-home ecosystem.

ChatGPT gets local pricing in India

OpenAI has introduced local pricing for ChatGPT in India, allowing users to pay in rupees instead of US dollars. The shift follows the release of GPT-5, which supports 12 Indian languages and offers improved relevance for local users.

India is now ChatGPT’s second-largest market after the US. The Plus plan now costs $24 per month, while the Pro and Team plans are priced at $240 and $25 per seat, respectively.

OpenAI is also expected to launch a lower-cost option called ChatGPT Go, potentially priced at $5 to appeal to casual users. Competitors like Google and Perplexity AI have also responded by offering free access to students and telecom customers to boost adoption.

Employees trust managers less when emails use AI

A new study has revealed that managers who use AI to write emails are often viewed as less sincere by their staff. Acceptance improved for emails focused on factual information, where employees were more forgiving of AI involvement.

Researchers found that employees judged AI use by their supervisors more harshly than their own, even when the level of assistance was the same.

Only 40 percent of respondents rated managers as sincere when their emails involved high AI input, compared to 83 percent for lighter use.

Professionals did consider AI-assisted emails efficient and polished, but trust declined when messages were relationship-driven or motivational.

Researchers highlighted that managers’ heavier reliance on AI may undermine perceptions of trust, care, and authenticity.

India pushes for safe AI use in financial sector

India’s central bank has proposed a national framework to guide the ethical and responsible use of AI in the financial sector.

The committee, set up by the Reserve Bank of India in December 2024, has made 26 recommendations across six focus areas, including infrastructure, governance, and assurance.

It advised establishing a digital backbone to support homegrown AI models and forming a multi-stakeholder body to evaluate risks.

A dedicated fund to boost domestic AI development tailored for finance was also proposed, alongside audit guidelines and policy frameworks.

The committee recommended integrating AI into platforms such as UPI while preserving public trust and ensuring security.

Led by IIT Bombay’s Pushpak Bhattacharyya, the panel noted the need to balance innovation with risk mitigation in regulatory design.

Igor Babuschkin leaves Elon Musk’s xAI for AI safety investment push

Igor Babuschkin, cofounder of Elon Musk’s AI startup xAI, has announced his departure to launch an investment firm dedicated to AI safety research. Musk created xAI in 2023 to rival Big Tech, criticising industry leaders for weak safety standards and excessive censorship.

Babuschkin revealed his new venture, Babuschkin Ventures, will fund AI safety research and startups developing responsible AI tools. Before leaving, he oversaw engineering across infrastructure, product, and applied AI projects, and built core systems for training and managing models.

His exit follows that of xAI’s legal chief, Robert Keele, earlier this month, highlighting the company’s churn amid intense competition between OpenAI, Google, and Anthropic. The big players are investing heavily in developing and deploying advanced AI systems.

Babuschkin, a former researcher at Google DeepMind and OpenAI, recalled the early scramble at xAI to set up infrastructure and models, calling it a period of rapid, foundational development. He said he had created many core tools that the startup still relies on.

Last month, X CEO Linda Yaccarino also resigned, months after Musk folded the social media platform into xAI. The company’s leadership changes come as the global AI race accelerates.

How Anthropic trains and tests Claude for safe use

Anthropic has outlined a multi-layered safety plan for Claude, aiming to keep it useful while preventing misuse. Its Safeguards team blends policy experts, engineers, and threat analysts to anticipate and counter risks.

The Usage Policy establishes clear guidelines for sensitive areas, including elections, finance, and child safety. Guided by the Unified Harm Framework, the team assesses potential physical, psychological, and societal harms, drawing on external experts for stress tests.

During the 2024 US elections, after Claude was found giving outdated voting information, Anthropic added a TurboVote banner directing users to accurate, non-partisan updates.

Safety is built into development, with guardrails to block illegal or malicious requests. Partnerships like ThroughLine help Claude handle sensitive topics, such as mental health, with care rather than avoidance or refusal.

Before launch, Claude undergoes safety, risk, and bias evaluations with government and industry partners. Once live, classifiers scan for violations in real time, while analysts track patterns of coordinated misuse.

Study warns AI chatbots exploit trust to gather personal data

According to a new King’s College London study, AI chatbots can easily manipulate people into divulging personal details. Chatbots like ChatGPT, Gemini, and Copilot are popular, but they raise privacy concerns, with experts warning that they can be co-opted for harm.

Researchers built AI models based on Mistral’s Le Chat and Meta’s Llama, programming them to extract private data directly, deceptively, or via reciprocity. Emotional appeals proved most effective, with users disclosing more while perceiving fewer safety risks.

The ‘friendliness’ of chatbots established trust, which was later exploited to breach privacy. Even direct requests yielded sensitive details, despite discomfort. Participants often shared their age, hobbies, location, gender, nationality, and job title, and sometimes also provided health or income data.

The study shows a gap between privacy risk awareness and behaviour. AI firms claim they collect data for personalisation, notifications, or research, but some are accused of using it to train models or breaching EU data protection rules.

Last week, Google faced criticism after private ChatGPT chats appeared in search results, revealing sensitive topics. Researchers suggest in-chat alerts about data collection and stronger regulation to stop covert harvesting.

Musk–Altman clash escalates over Apple’s alleged AI bias

Elon Musk has accused Apple of favouring ChatGPT on its App Store and threatened legal action, sparking a clash with OpenAI CEO Sam Altman. Musk called Apple’s practices an antitrust violation and vowed to take immediate action through his AI company, xAI.

Critics on X noted rivals like DeepSeek AI and Perplexity AI have topped the App Store this year. Altman called Musk’s claim ‘remarkable’ and accused him of manipulating X. Musk called him a ‘liar’, prompting demands for proof he never altered X’s algorithm.

OpenAI and xAI recently launched new versions of ChatGPT and Grok, which ranked first and fifth among free iPhone apps on Tuesday. Apple, which partnered with OpenAI in 2024 to integrate ChatGPT, did not comment on the matter. Rankings take into account engagement, reviews, and downloads.

The dispute reignites a feud between Musk and OpenAI, which he co-founded but left before the success of ChatGPT. In April, OpenAI accused Musk of attempting to harm the company and establish a rival. Musk launched xAI in 2023 to compete with major players in the AI space.

Chinese startup DeepSeek has disrupted the AI market with cost-efficient models. Since ChatGPT’s 2022 debut, major tech firms have invested billions in AI. OpenAI claims Musk’s actions are driven by ambition rather than a mission for humanity’s benefit.

Google backs workforce and AI education in Oklahoma with a $9 billion investment

Google has announced a $9 billion investment in Oklahoma over the next two years to expand cloud and AI infrastructure.

The funds will support a new data centre campus in Stillwater and an expansion of the existing facility in Pryor, alongside a separate $1 billion commitment to American education and competitiveness.

The announcement was made alongside Governor Kevin Stitt, Alphabet and Google executives, and community leaders.

Alongside the infrastructure projects, Google is funding education and workforce initiatives with the University of Oklahoma and Oklahoma State University through the Google AI for Education Accelerator.

Students will gain no-cost access to Career Certificates and AI training courses, helping them build job-ready AI skills beyond their standard curricula.

Additional funding will support ALLIANCE’s electrical training to expand Oklahoma’s electrical workforce by 135%, creating the talent needed to power AI-driven energy infrastructure.

Google described the investment as part of an ‘extraordinary time for American innovation’ and a step towards maintaining US leadership in AI.

The move also speaks to national security concerns, ensuring the US has the infrastructure and expertise to keep pace with international competitors such as China’s DeepSeek, while Google itself contends with domestic rivals like OpenAI and Anthropic.
