Igor Babuschkin leaves Elon Musk’s xAI for AI safety investment push

Igor Babuschkin, cofounder of Elon Musk’s AI startup xAI, has announced his departure to launch an investment firm dedicated to AI safety research. Musk created xAI in 2023 to rival Big Tech, criticising industry leaders for weak safety standards and excessive censorship.

Babuschkin revealed that his new venture, Babuschkin Ventures, will fund AI safety research and startups developing responsible AI tools. Before leaving, he oversaw engineering across infrastructure, product, and applied AI projects, and built core systems for training and managing models.

His exit follows that of xAI’s legal chief, Robert Keele, earlier this month, underscoring churn at the company amid intense competition with OpenAI, Google, and Anthropic. The big players are investing heavily in developing and deploying advanced AI systems.

Babuschkin, a former researcher at Google DeepMind and OpenAI, recalled the early scramble at xAI to set up infrastructure and models, calling it a period of rapid, foundational development. He said he had created many core tools that the startup still relies on.

Last month, X CEO Linda Yaccarino also resigned, months after Musk folded the social media platform into xAI. The company’s leadership changes come as the global AI race accelerates.

How Anthropic trains and tests Claude for safe use

Anthropic has outlined a multi-layered safety plan for Claude, aiming to keep it useful while preventing misuse. Its Safeguards team blends policy experts, engineers, and threat analysts to anticipate and counter risks.

The Usage Policy establishes clear guidelines for sensitive areas, including elections, finance, and child safety. Guided by the Unified Harm Framework, the team assesses potential physical, psychological, and societal harms, drawing on external experts for stress tests.

During the 2024 US elections, Anthropic added a TurboVote banner after detecting that Claude could serve outdated voting information, ensuring users saw only accurate, non-partisan updates.

Safety is built into development, with guardrails to block illegal or malicious requests. Partnerships, such as one with ThroughLine, help Claude handle sensitive topics like mental health with care rather than avoidance or outright refusal.

Before launch, Claude undergoes safety, risk, and bias evaluations with government and industry partners. Once live, classifiers scan for violations in real time, while analysts track patterns of coordinated misuse.

Age checks slash visits to top UK adult websites

Adult site traffic in the UK has fallen dramatically since new age verification rules came into force on 25 July under the Online Safety Act.

Figures from analytics firm Similarweb show Pornhub lost more than one million visitors in just two weeks, with traffic falling by 47%. XVideos saw a similar drop, while OnlyFans traffic fell by more than 10%.

The rules require adult websites to make it harder for under-18s to access explicit material, leading some users to turn to smaller and less regulated sites instead of compliant platforms. Pornhub said the trend mirrored patterns seen in other countries with similar laws.

The clampdown has also triggered a surge in virtual private network (VPN) downloads in the UK, as the tools can hide a user’s location and help bypass restrictions.

Ofcom estimates that 14 million people in the UK watch pornography and has proposed age checks using credit cards, photo ID, or AI analysis of selfies.

Critics argue that instead of improving safety, the measures may drive people towards more extreme or illicit material on harder-to-monitor parts of the internet, including the dark web.

Study warns AI chatbots exploit trust to gather personal data

According to a new King’s College London study, AI chatbots can easily manipulate people into divulging personal details. Chatbots like ChatGPT, Gemini, and Copilot are popular, but they raise privacy concerns, with experts warning that they can be co-opted for harm.

Researchers built AI models based on Mistral’s Le Chat and Meta’s Llama, programming them to extract private data directly, deceptively, or via reciprocity. Emotional appeals proved most effective, with users disclosing more while perceiving fewer safety risks.

The ‘friendliness’ of chatbots established trust, which was later exploited to breach privacy. Even direct requests yielded sensitive details, despite discomfort. Participants often shared their age, hobbies, location, gender, nationality, and job title, and sometimes also provided health or income data.

The study shows a gap between privacy risk awareness and behaviour. AI firms claim they collect data for personalisation, notifications, or research, but some are accused of using it to train models or breaching EU data protection rules.

Last week, criticism erupted after private ChatGPT chats surfaced in Google search results, revealing sensitive topics. Researchers suggest in-chat alerts about data collection and stronger regulation to stop covert harvesting.

Russia restricts Telegram and WhatsApp calls

Russian authorities have begun partially restricting calls on Telegram and WhatsApp, citing the need for crime prevention. Regulator Roskomnadzor accused the platforms of enabling fraud, extortion, and terrorism while ignoring repeated requests to act. Neither platform commented immediately.

Russia has long tightened internet control through restrictive laws, bans, and traffic monitoring. VPNs remain a workaround but are often blocked. This summer, further limits have included mobile internet shutdowns and penalties for specific online searches.

Authorities have introduced a new national messaging app, MAX, which is expected to be heavily monitored. Reports suggest disruptions to WhatsApp and Telegram calls began earlier this week. Complaints cited dropped calls or muted conversations.

With 96 million monthly users, WhatsApp is Russia’s most popular platform, followed by Telegram with 89 million. Past clashes include Russia’s failed attempt to ban Telegram (2018–20) and Meta’s designation as an extremist entity in 2022.

WhatsApp accused Russia of trying to block encrypted communication and vowed to keep it available. Lawmaker Anton Gorelkin suggested that MAX should replace WhatsApp. The app’s terms permit data sharing with the authorities, and it must be pre-installed on all smartphones sold in Russia.

Netherlands regulator presses tech firms over election disinformation

The Netherlands’ competition authority will meet with 12 major online platforms, including TikTok, Facebook and X, on 15 September to address the spread of disinformation ahead of the 29 October elections.

The session will also involve the European Commission, national regulators and civil society groups.

The Authority for Consumers and Markets (ACM), which enforces the EU’s Digital Services Act in the Netherlands, is mandated to oversee election integrity under the law. The early vote was called in June after the Dutch government collapsed over migration policy disputes.

Platforms designated as Very Large Online Platforms must uphold transparent policies for moderating content and act decisively against illegal material, ACM director Manon Leijten said.

In July, the ACM contacted the platforms to outline their legal obligations, request details of their Trust and Safety teams, and collect responses to a questionnaire on safeguarding public debate.

The September meeting will evaluate how companies plan to tackle disinformation, foreign interference and illegal hate speech during the campaign period.

Google backs workforce and AI education in Oklahoma with a $9 billion investment

Google has announced a $9 billion investment in Oklahoma over the next two years to expand cloud and AI infrastructure.

The funds will support a new data centre campus in Stillwater and an expansion of the existing facility in Pryor. The accompanying education initiatives form part of Google’s broader $1 billion commitment to American education and competitiveness.

The announcement was made alongside Governor Kevin Stitt, Alphabet and Google executives, and community leaders.

Alongside the infrastructure projects, Google is funding education and workforce initiatives with the University of Oklahoma and Oklahoma State University through the Google AI for Education Accelerator.

Students will gain no-cost access to Career Certificates and AI training courses, helping them build critical AI and job-ready skills alongside their standard curricula.

Additional funding will support ALLIANCE’s electrical training to expand Oklahoma’s electrical workforce by 135%, creating the talent needed to power AI-driven energy infrastructure.

Google described the investment as part of an ‘extraordinary time for American innovation’ and a step towards maintaining US leadership in AI.

The move also addresses national security concerns, ensuring the US has the infrastructure and expertise to stay ahead of international competitors such as China’s DeepSeek, while Google contends with domestic rivals like OpenAI and Anthropic.

Con artists pose as lawyers to steal from crypto scam victims

A new fraud tactic is emerging, with con artists posing as lawyers to target cryptocurrency scam victims. They exploit desperation by promising to recover lost funds, using elaborate ruses like fabricated government partnerships and forged documents.

Sophisticated tactics, including fake websites and staged WhatsApp chats, pressure people into paying additional fees.

The US Federal Bureau of Investigation has issued a warning about the scam. Fake law firms use detailed knowledge of a victim’s prior losses to appear credible, knowing the exact amounts and dates of fraudulent transactions.

The scheme often escalates when victims are directed to deposit money into what appear to be foreign bank accounts, which are sophisticated facades designed to steal more funds.

The FBI recommends a ‘Zero Trust’ approach to combat the fraud. Any unsolicited recovery offer should be met with immediate scepticism, and a major red flag is a representative who refuses to appear on camera or provide licensing details.

The bureau also advises keeping detailed records of all interactions, like emails and video calls, as documentation could prove invaluable for investigators.

Cyber-crime group BlackSuit crippled by $1 million crypto seizure

Law enforcement agencies in the United States and abroad have carried out a coordinated operation to dismantle the BlackSuit ransomware gang, seizing servers, domains, and approximately $1 million in cryptocurrency linked to ransom demands.

The action, led by the Department of Justice, Homeland Security Investigations, the Secret Service, the IRS and the FBI, involved cooperation with agencies across the UK, Germany, France, Canada, Ukraine, Ireland and Lithuania.

BlackSuit, a rebranded successor to the Royal ransomware gang and connected to the notorious Conti group, has been active since 2022. It has targeted over 450 US organisations across healthcare, government, manufacturing and education sectors, demanding more than $370 million in ransoms.

The seized cryptocurrency was traced back to a 2023 ransom payment of around 49.3 Bitcoin, valued at approximately $1.4 million at the time. Investigators worked with cryptocurrency exchanges to freeze and recover roughly $1 million of those funds in early 2024.

While this marks a significant blow to the gang’s operations, officials warn that without arrests, the threat may persist or re-emerge under new identities.

New digital headquarters aims to embed AI across Kazakhstan’s public services

Kazakhstan’s Prime Minister Olzhas Bektenov has established a digital transformation group, or digital headquarters, to advance AI integration across the country, following President Tokayev’s directives of 11 August 2025.

The group includes senior officials, such as the deputy prime minister, the head of strategic planning, the minister of digital development, innovation, and aerospace industry, and the presidential digitalisation advisor. The group is tasked with implementing nine priority areas outlined by the president.

These span AI deployment in the economy, public administration, and healthcare; digital strategy development; IT architecture modernisation; cybersecurity; support for IT startups; the national QazTech platform; and innovative city initiatives.

A significant component of the plan involves crafting a roadmap with the Samruk Kazyna Sovereign Wealth Fund to embed AI in production and boost labour productivity. In healthcare, AI solutions are expected to improve diagnostics, personalise treatment, enable continuous patient monitoring, and streamline workflows. Startups will gain access to the Ministry of Health’s infrastructure and integration into a unified medical database.

Consolidating government communication via the Aitu national messenger, modernising IT systems, and strengthening cybersecurity are intended to create a seamless, safe digital environment for citizens. The emphasis is on swift collaboration to address AI integration challenges across all sectors.
