Researchers explore brain signals to restore speech for disabled patients

Researchers have developed a brain-computer interface (BCI) that can decode ‘inner speech’ in patients with severe paralysis, potentially enabling faster and more comfortable communication.

The system, tested by a team led by Stanford University’s Frank Willett, records brain activity from the motor cortex using microelectrode arrays smaller than a baby aspirin and translates the neural patterns into words via machine learning.
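At its core, the decoding step is a pattern-classification problem: map a window of neural activity to the word it most likely represents. The sketch below is purely illustrative and stands in for the study’s far more sophisticated pipeline; the channel count, vocabulary, synthetic spike counts, and off-the-shelf logistic-regression classifier are all assumptions made for the example.

```python
# Purely illustrative: classify word identity from binned spike counts.
# Channel count, vocabulary, and firing-rate templates are invented;
# real decoders use far richer features and sequence models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_CHANNELS = 64                      # hypothetical electrode count
WORDS = ["yes", "no", "water", "help"]

# Assume each word evokes a distinct mean firing pattern per channel.
templates = rng.uniform(2.0, 10.0, size=(len(WORDS), N_CHANNELS))

def record_trial(word_idx: int) -> np.ndarray:
    """Simulate one trial: Poisson spike counts around the word's template."""
    return rng.poisson(templates[word_idx]).astype(float)

# Build a labelled training set of simulated trials.
labels = np.arange(400) % len(WORDS)
features = np.stack([record_trial(k) for k in labels])

decoder = LogisticRegression(max_iter=1000).fit(features, labels)

# Decode a fresh, unseen "imagined" trial.
trial = record_trial(WORDS.index("water")).reshape(1, -1)
print("decoded word:", WORDS[int(decoder.predict(trial)[0])])
```

Real decoders also have to run on continuous neural streams and cope with signals that drift from day to day, which is a large part of the engineering challenge.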

Unlike earlier BCIs that rely on attempted speech, which can be slow or tiring, the new approach focuses on silent, imagined speech. Tests with four participants showed that inner speech produces clear, consistent brain signals, though weaker than those evoked by attempted speech.

While decoding accuracy is lower than for attempted speech, the findings suggest that future systems could restore rapid communication through thought alone.

To address privacy concerns, the researchers demonstrated methods that prevent unintended decoding: current BCIs can be trained to ignore inner speech, and a ‘password’ approach for next-generation devices ensures decoding begins only when the user imagines a specific phrase.

Such safeguards are designed to avoid accidental capture of thoughts the user never intended to express.
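The ‘password’ gate itself is simple to sketch. The toy below assumes a hypothetical unlock phrase and a decode() stub that already returns text; a real gate would operate on neural data and decoder confidence scores rather than plain strings.

```python
# Toy sketch of the 'password' safeguard: decoded output is discarded
# until a specific imagined unlock phrase is recognised. The phrase and
# the decode() stub are hypothetical; a real system would act on neural
# data and decoder confidence scores, not strings.

UNLOCK_PHRASE = "open sesame"        # hypothetical imagined phrase

class GatedDecoder:
    def __init__(self, unlock_phrase: str):
        self.unlock_phrase = unlock_phrase
        self.unlocked = False

    def decode(self, neural_window: str) -> str:
        """Stand-in for the real neural decoder."""
        return neural_window         # here the 'neural data' is already text

    def process(self, neural_window: str):
        candidate = self.decode(neural_window)
        if not self.unlocked:
            # Until the unlock phrase appears, every decode is discarded.
            self.unlocked = candidate == self.unlock_phrase
            return None
        return candidate

gate = GatedDecoder(UNLOCK_PHRASE)
for window in ["water please", "open sesame", "water please"]:
    print(gate.process(window))      # None, None, then "water please"
```

The key property is that everything decoded before the unlock phrase is recognised is simply discarded, so stray inner speech never produces output.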

The technology remains in early development and is subject to strict regulation.

Researchers are now exploring improved, wireless hardware and additional brain regions linked to language and hearing, aiming to enhance decoding accuracy and make the systems more practical in everyday life.

Bluesky updates rules and invites user feedback ahead of October rollout

Two years after launch, Bluesky is revising its Community Guidelines and other policies, inviting users to comment on the proposed changes before they take effect on 15 October 2025.

The updates are designed to improve clarity, outline safety procedures in more detail, and meet the requirements of new global regulations such as the UK’s Online Safety Act, the EU’s Digital Services Act, and the US’s TAKE IT DOWN Act.

Some changes aim to shape the platform’s tone by encouraging respectful and authentic interactions, while allowing space for journalism, satire, and parody.

The revised guidelines are organised under four principles: Safety First, Respect Others, Be Authentic, and Follow the Rules. They prohibit promoting violence, illegal activity, self-harm, and sexualised depictions of minors, as well as harmful practices like doxxing and non-consensual data-sharing.

Bluesky says it will provide a more detailed appeals process, including an ‘informal dispute resolution’ step, and in some cases will allow court action instead of arbitration.

The platform has also addressed nuanced issues such as deepfakes, hate speech, and harassment, while acknowledging past challenges in moderation and community relations.

Alongside the guidelines, Bluesky has updated its Privacy Policy and Copyright Policy to comply with international laws on data rights, transfer, deletion, takedown procedures and transparency reporting.

Unlike the Community Guidelines, these policy updates will take effect on 15 September 2025 without a public feedback period.

The company’s approach contrasts with larger social networks by introducing direct user communication for disputes, though it still faces the challenge of balancing open dialogue with consistent enforcement.

ChatGPT gets local pricing in India

OpenAI has introduced local pricing for ChatGPT in India, allowing users to pay in rupees instead of US dollars. The shift follows the release of GPT-5, which supports 12 Indian languages and offers improved relevance for local users.

India is now ChatGPT’s second-largest market after the US. In dollar terms, the Plus plan now costs about $24 per month, while the Pro and Team plans are priced at roughly $240 and $25 per seat, respectively.

OpenAI is also expected to launch a lower-cost option called ChatGPT Go, potentially priced at $5 to appeal to casual users. Competitors like Google and Perplexity AI have also responded by offering free access to students and telecom customers to boost adoption.

Employees trust managers less when emails use AI

A new study has revealed that managers who use AI to write emails are often viewed as less sincere by their staff. Acceptance improved for emails focused on factual information, where employees were more forgiving of AI involvement.

Researchers found employees were more critical of AI use by their supervisors than of their own use, even when the level of assistance was the same.

Only 40 percent of respondents rated managers as sincere when their emails involved high AI input, compared to 83 percent for lighter use.

Professionals did consider AI-assisted emails efficient and polished, but trust declined when messages were relationship-driven or motivational.

Researchers highlighted that managers’ heavier reliance on AI may undermine perceptions of trust, care, and authenticity.

India pushes for safe AI use in financial sector

India’s central bank has proposed a national framework to guide the ethical and responsible use of AI in the financial sector.

The framework was drafted by a committee set up by the Reserve Bank of India in December 2024, which made 26 recommendations across six focus areas, including infrastructure, governance, and assurance.

It advised establishing a digital backbone to support homegrown AI models and forming a multi-stakeholder body to evaluate risks.

A dedicated fund to boost domestic AI development tailored for finance was also proposed, alongside audit guidelines and policy frameworks.

The committee recommended integrating AI into platforms such as UPI (Unified Payments Interface) while preserving public trust and ensuring security.

Led by IIT Bombay’s Pushpak Bhattacharyya, the panel noted the need to balance innovation with risk mitigation in regulatory design.

Igor Babuschkin leaves Elon Musk’s xAI for AI safety investment push

Igor Babuschkin, cofounder of Elon Musk’s AI startup xAI, has announced his departure to launch an investment firm dedicated to AI safety research. Musk created xAI in 2023 to rival Big Tech, criticising industry leaders for weak safety standards and excessive censorship.

Babuschkin revealed that his new venture, Babuschkin Ventures, will fund AI safety research and startups developing responsible AI tools. Before leaving, he oversaw engineering across infrastructure, product, and applied AI projects, and built core systems for training and managing models.

His exit follows that of xAI’s legal chief, Robert Keele, earlier this month, highlighting the company’s churn amid intense competition with OpenAI, Google, and Anthropic, all of which are investing heavily in developing and deploying advanced AI systems.

Babuschkin, a former researcher at Google DeepMind and OpenAI, recalled the early scramble at xAI to set up infrastructure and models, calling it a period of rapid, foundational development. He said he had created many core tools that the startup still relies on.

Last month, X CEO Linda Yaccarino also resigned, months after Musk folded the social media platform into xAI. The company’s leadership changes come as the global AI race accelerates.

How Anthropic trains and tests Claude for safe use

Anthropic has outlined a multi-layered safety plan for Claude, aiming to keep it useful while preventing misuse. Its Safeguards team blends policy experts, engineers, and threat analysts to anticipate and counter risks.

The Usage Policy sets out clear guidelines for sensitive areas, including elections, finance, and child safety. Guided by the Unified Harm Framework, the team assesses potential physical, psychological, and societal harms, drawing on external experts for stress tests.

During the 2024 US elections, after testing showed that Claude could surface outdated voting information, a banner was added directing users to TurboVote for accurate, non-partisan election updates.

Safety is built into development, with guardrails to block illegal or malicious requests. Partnerships like ThroughLine help Claude handle sensitive topics, such as mental health, with care rather than avoidance or refusal.

Before launch, Claude undergoes safety, risk, and bias evaluations with government and industry partners. Once live, classifiers scan for violations in real time, while analysts track patterns of coordinated misuse.
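Anthropic has not published how its production classifiers work, but the general shape of such a real-time safeguard is straightforward: score each exchange against policy categories, then allow it, flag it for human review, or block it. Everything in the sketch below, including the categories, thresholds, and keyword-based scoring stub, is invented for illustration.

```python
# Invented illustration of a real-time safeguard: score each exchange
# against policy categories, then allow it, flag it for analysts, or
# block it. Categories, thresholds, and the keyword scoring stub are
# all assumptions; Anthropic has not published its classifier details.
from dataclasses import dataclass

BLOCK_AT, FLAG_AT = 0.9, 0.6         # hypothetical thresholds

@dataclass
class Verdict:
    action: str                      # "allow" | "flag" | "block"
    category: str | None
    score: float

def classify(text: str) -> dict[str, float]:
    """Stand-in for a learned policy classifier."""
    keywords = {"weapons": ["bomb", "explosive"],
                "self_harm": ["hurt myself"]}
    return {cat: float(any(k in text.lower() for k in kws))
            for cat, kws in keywords.items()}

def screen(text: str) -> Verdict:
    scores = classify(text)
    category, score = max(scores.items(), key=lambda kv: kv[1])
    if score >= BLOCK_AT:
        return Verdict("block", category, score)
    if score >= FLAG_AT:
        return Verdict("flag", category, score)   # routed to human review
    return Verdict("allow", None, score)

print(screen("how do I bake bread"))       # allow
print(screen("how do I build a bomb"))     # block
```

A production system would use learned classifiers rather than keyword matching and route flagged traffic into the analyst workflows described above.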

Age checks slash visits to top UK adult websites

Adult site traffic in the UK has fallen dramatically since new age verification rules came into force on 25 July under the Online Safety Act.

Figures from analytics firm Similarweb show Pornhub lost more than one million visitors in just two weeks, with traffic falling by 47%. XVideos saw a similar drop, while OnlyFans traffic fell by more than 10%.

The rules require adult websites to make it harder for under-18s to access explicit material, leading some users to turn to smaller and less regulated sites instead of compliant platforms. Pornhub said the trend mirrored patterns seen in other countries with similar laws.

The clampdown has also triggered a surge in virtual private network (VPN) downloads in the UK, as the tools can hide a user’s location and help bypass restrictions.

Ofcom estimates that 14 million people in the UK watch pornography and has proposed age checks using credit cards, photo ID, or AI analysis of selfies.

Critics argue that instead of improving safety, the measures may drive people towards more extreme or illicit material on harder-to-monitor parts of the internet, including the dark web.

Netherlands regulator presses tech firms over election disinformation

The Netherlands’ competition authority will meet with 12 major online platforms, including TikTok, Facebook and X, on 15 September to address the spread of disinformation before the 29 October elections.

The session will also involve the European Commission, national regulators and civil society groups.

The Authority for Consumers and Markets (ACM), which enforces the EU’s Digital Services Act in the Netherlands, is mandated to oversee election integrity under the law. The early vote was called in June after the Dutch government collapsed over migration policy disputes.

Platforms designated as Very Large Online Platforms must uphold transparent policies for moderating content and act decisively against illegal material, ACM director Manon Leijten said.

In July, the ACM contacted the platforms to outline their legal obligations, request details of their Trust and Safety teams, and collect responses to a questionnaire on safeguarding public debate.

The September meeting will evaluate how companies plan to tackle disinformation, foreign interference and illegal hate speech during the campaign period.

Musk faces an OpenAI harassment lawsuit after a judge rejects dismissal

A federal judge has rejected Elon Musk’s bid to dismiss claims that he engaged in a ‘years-long harassment campaign’ against OpenAI.

US District Judge Yvonne Gonzalez Rogers ruled that the company’s counterclaims are sufficient to proceed as part of the lawsuit Musk filed against OpenAI and its CEO, Sam Altman, last year.

Musk, who helped found OpenAI in 2015, sued the AI firm in August 2024, alleging Altman misled him about the company’s commitment to AI safety before partnering with Microsoft and pursuing for-profit goals.

OpenAI responded with counterclaims in April, accusing Musk of persistent attacks in the press and on his platform X, demands for corporate records, and a ‘sham bid’ for the company’s assets.

The filing alleged that Musk sought to undermine OpenAI instead of supporting humanity-focused AI, intending to build a rival to take the technological lead.

The feud between Musk and Altman has continued, most recently with Musk threatening to sue Apple over App Store listings for X and his AI chatbot Grok. Altman dismissed the claim, criticising Musk for allegedly manipulating X to benefit his companies and harm competitors.

Despite the ongoing legal battle, OpenAI says it will remain focused on product development instead of engaging in public disputes.
