Seedbox.AI backs re-training AI models to boost Europe’s competitiveness

Germany’s Seedbox.AI is betting on re-training large language models (LLMs) rather than competing to build them from scratch. Co-founder Kai Kölsch believes this approach could give Europe a strategic edge in AI.

The Stuttgart-based startup adapts models like Google’s Gemini and Meta’s Llama for medical chatbots and real estate assistant applications. Kölsch compares Europe’s role in AI to improving a car already on the road, rather than reinventing the wheel.

A significant challenge, however, is access to specialised chips and computing power. The European Union is building an AI factory in Stuttgart, Germany, which Seedbox hopes will expand its capabilities in multilingual AI training.

Kölsch warns that spreading the planned EU gigafactories across too many sites will dilute their impact. He also calls for delaying the AI Act, arguing that regulatory uncertainty discourages established companies from innovating.

Europe’s AI sector also struggles with limited venture capital compared to the United States. Kölsch notes that while the money exists, it is often channelled into safer investments abroad.

Talent shortages compound the problem. Seedbox is hiring, but top researchers are lured by Big Tech salaries, far above what European firms typically offer. Kölsch says talent inevitably follows capital, making EU funding reform essential.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

DeepSeek delays next AI model amid Huawei chip challenges

Chinese AI company DeepSeek has postponed the launch of its R2 model after repeated technical problems using Huawei’s Ascend processors for training. The delay highlights Beijing’s ongoing struggle to replace US-made chips with domestic alternatives.

Authorities had encouraged DeepSeek to shift from Nvidia hardware to Huawei’s chips after the release of its R1 model in January. However, training failures, slower inter-chip connections, stability issues, and weaker software performance led the startup to revert to Nvidia chips for training, while continuing to explore Ascend for inference tasks.

Despite Huawei deploying engineers to assist on-site, DeepSeek was unable to complete a successful training run using Ascend processors. The company is also contending with extended data-labelling timelines for its updated model, adding to the delays.

The situation underscores how far Chinese chip technology lags behind Nvidia for advanced AI development, even as Beijing pressures domestic firms to use local products. Industry observers say Huawei is facing “growing pains” but could close the gap over time. Meanwhile, competitors like Alibaba’s Qwen3 have integrated elements of DeepSeek’s design more efficiently, intensifying market pressure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Igor Babuschkin leaves Elon Musk’s xAI for AI safety investment push

Igor Babuschkin, co-founder of Elon Musk’s AI startup xAI, has announced his departure to launch an investment firm dedicated to AI safety research. Musk created xAI in 2023 to rival Big Tech, criticising industry leaders for weak safety standards and excessive censorship.

Babuschkin revealed his new venture, Babuschkin Ventures, will fund AI safety research and startups developing responsible AI tools. Before leaving, he oversaw engineering across infrastructure, product, and applied AI projects, and built core systems for training and managing models.

His exit follows that of xAI’s legal chief, Robert Keele, earlier this month, highlighting the company’s churn amid intense competition between OpenAI, Google, and Anthropic. The big players are investing heavily in developing and deploying advanced AI systems.

Babuschkin, a former researcher at Google DeepMind and OpenAI, recalled the early scramble at xAI to set up infrastructure and models, calling it a period of rapid, foundational development. He said he had created many core tools that the startup still relies on.

Last month, X CEO Linda Yaccarino also resigned, months after Musk folded the social media platform into xAI. The company’s leadership changes come as the global AI race accelerates.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Study warns AI chatbots exploit trust to gather personal data

According to a new King’s College London study, AI chatbots can easily manipulate people into divulging personal details. Chatbots like ChatGPT, Gemini, and Copilot are popular, but they raise privacy concerns, with experts warning that they can be co-opted for harm.

Researchers built AI models based on Mistral’s Le Chat and Meta’s Llama, programming them to extract private data directly, deceptively, or via reciprocity. Emotional appeals proved most effective, with users disclosing more while perceiving fewer safety risks.

The ‘friendliness’ of chatbots established trust, which was later exploited to breach privacy. Even direct requests yielded sensitive details, despite participants’ discomfort. Participants often shared their age, hobbies, location, gender, nationality, and job title, and sometimes also provided health or income data.

The study shows a gap between privacy risk awareness and behaviour. AI firms claim they collect data for personalisation, notifications, or research, but some are accused of using it to train models or breaching EU data protection rules.

Last week, criticism mounted after private ChatGPT chats appeared in Google search results, revealing sensitive topics. Researchers suggest in-chat alerts about data collection and stronger regulation to stop covert harvesting.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Russia restricts Telegram and WhatsApp calls

Russian authorities have begun partially restricting calls on Telegram and WhatsApp, citing the need for crime prevention. Regulator Roskomnadzor accused the platforms of enabling fraud, extortion, and terrorism while ignoring repeated requests to act. Neither platform commented immediately.

Russia has long tightened internet control through restrictive laws, bans, and traffic monitoring. VPNs remain a workaround but are often blocked. This summer, further restrictions included mobile internet shutdowns and penalties for specific online searches.

Authorities have introduced a new national messaging app, MAX, which is expected to be heavily monitored. Reports suggest disruptions to WhatsApp and Telegram calls began earlier this week. Complaints cited dropped calls or muted conversations.

With 96 million monthly users, WhatsApp is Russia’s most popular platform, followed by Telegram with 89 million. Past clashes include Russia’s failed attempt to ban Telegram (2018–20) and Meta’s designation as an extremist entity in 2022.

WhatsApp accused Russia of trying to block encrypted communication and vowed to keep it available. Lawmaker Anton Gorelkin suggested that MAX should replace WhatsApp. The app’s terms permit data sharing with the authorities, and it must be pre-installed on all smartphones sold in Russia.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Musk–Altman clash escalates over Apple’s alleged AI bias

Elon Musk has accused Apple of favouring ChatGPT on its App Store and threatened legal action, sparking a clash with OpenAI CEO Sam Altman. Musk called Apple’s practices an antitrust violation and vowed to take immediate action through his AI company, xAI.

Critics on X noted that rivals such as DeepSeek AI and Perplexity AI have topped the App Store this year. Altman called Musk’s claim ‘remarkable’ and accused him of manipulating X; Musk called him a ‘liar’, prompting Altman to demand proof that he had never altered X’s algorithm.

OpenAI and xAI recently launched new versions of ChatGPT and Grok, which ranked first and fifth, respectively, among free iPhone apps on Tuesday. Apple, which partnered with OpenAI in 2024 to integrate ChatGPT, did not comment on the matter. Rankings take into account engagement, reviews, and downloads.

The dispute reignites a feud between Musk and OpenAI, which he co-founded but left before the success of ChatGPT. In April, OpenAI accused Musk of attempting to harm the company and establish a rival. Musk launched xAI in 2023 to compete with major players in the AI space.

Chinese startup DeepSeek has disrupted the AI market with cost-efficient models. Since ChatGPT’s 2022 debut, major tech firms have invested billions in AI. OpenAI claims Musk’s actions are driven by ambition rather than a mission for humanity’s benefit.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Musk faces OpenAI harassment counterclaims after a judge rejects dismissal bid

A federal judge has rejected Elon Musk’s bid to dismiss claims that he engaged in a ‘years-long harassment campaign’ against OpenAI.

US District Judge Yvonne Gonzalez Rogers ruled that the company’s counterclaims are sufficient to proceed as part of the lawsuit Musk filed against OpenAI and its CEO, Sam Altman, last year.

Musk, who helped found OpenAI in 2015, sued the AI firm in August 2024, alleging Altman misled him about the company’s commitment to AI safety before partnering with Microsoft and pursuing for-profit goals.

OpenAI responded with counterclaims in April, accusing Musk of persistent attacks in the press and on his platform X, demands for corporate records, and a ‘sham bid’ for the company’s assets.

The filing alleged that Musk sought to undermine OpenAI instead of supporting humanity-focused AI, intending to build a rival to take the technological lead.

The feud between Musk and Altman has continued, most recently with Musk threatening to sue Apple over App Store listings for X and his AI chatbot Grok. Altman dismissed the claim, criticising Musk for allegedly manipulating X to benefit his companies and harm competitors.

Despite the ongoing legal battle, OpenAI says it will remain focused on product development instead of engaging in public disputes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Crypto wallet apps must now comply with new Google Play rules

Google Play is introducing new policies for cryptocurrency wallet applications. The new rules will require them to be licensed in more than fifteen jurisdictions, including the United States and the European Union.

The changes, which come into effect on 29 October, will require providers in the US to register as a money services business or money transmitter. Those in the EU, meanwhile, must register as a crypto-asset service provider.

The updated rules, which aim to ensure compliance with industry standards, will not apply to non-custodial wallets. Following initial concerns from the crypto community, Google clarified the policy on X, stating that non-custodial apps are not in scope.

The new regulations could lead to a broader adoption of Know Your Customer checks and other anti-money laundering measures for the affected apps.

Google Play has a mixed history with cryptocurrency, having previously banned crypto mining apps in 2018 and removed several crypto-related news and gaming apps. In 2021, the company removed several deceptive apps for allegedly tricking users into paying for an illegitimate cloud service.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Perplexity AI offers US$34.5b for Google Chrome

Perplexity AI has made a surprise US$34.5 billion offer to acquire Google’s Chrome browser, which could align with antitrust measures under consideration in the US.

The San Francisco-based startup submitted the proposal in a letter of intent, claiming it would keep Chrome independent while prioritising openness and consumer protection.

The bid arrives as Google awaits a court ruling on potential remedies after being found to have maintained an illegal monopoly in online search.

US government lawyers have proposed divesting Chrome rather than allowing Google to strengthen its dominance through AI. Google has urged the court to reject such a move, warning that a forced sale could harm innovation and reduce quality.

Analysts at Baird Equity Research said Perplexity’s offer undervalues Chrome and may be intended to prompt rival bids or influence the judge’s decision. They added that Perplexity, which already operates its own browser, could gain an advantage if Chrome became independent.

Google argues that most Chrome users are outside the US, meaning a forced sale would have global implications. The ruling is expected by the end of August, with the outcome likely to reshape the competitive landscape for browsers as AI increasingly shapes how users access the internet.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK-based ODI outlines vision for EU AI Act and data policy

The Open Data Institute (ODI) has published a manifesto setting out six principles for shaping European Union policy on AI and data. Aimed at supporting policymakers, it aligns with the EU’s upcoming digital reforms, including the AI Act and the review of the bloc’s digital framework.

Although based in the UK, the ODI has previously contributed to EU policymaking, including work on the General-Purpose AI Code of Practice and consultations on the use of health data. The organisation also launched a similar manifesto for UK data and AI policy in 2024.

The ODI says the EU has a chance to establish a global model of digital governance that prioritises people’s interests. Director of research Elena Simperl called for robust open data infrastructure, inclusive participation, and independent oversight to build trust, support innovation, and protect values.

Drawing on the EU’s Competitiveness Compass and the Draghi report, the six principles are: data infrastructure, open data, trust, independent organisations, an inclusive data ecosystem, and data skills. The goal is to balance regulation and innovation while upholding rights, values, and interoperability.

The ODI highlights the need to limit bias and inequality, broaden access to data and skills, and support smaller enterprises. It argues that strong governance should be treated like physical infrastructure, enabling competitiveness while safeguarding rights and public trust in the AI era.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!