OpenAI invests in Merge Labs to advance brain-computer interfaces

US AI company OpenAI has invested in Merge Labs as part of a seed funding round, signalling growing interest in brain-computer interfaces as a future layer of human–technology interaction.

Merge Labs describes its mission as bridging the gap between biology and AI to expand human capability and agency. The research lab is developing new BCI approaches designed to operate safely while enabling much higher communication bandwidth between the brain and digital systems.

AI is expected to play a central role in Merge Labs’ work, supporting advances in neuroscience, bioengineering and device development, rather than relying on traditional interface models.

High-bandwidth brain interfaces are also expected to benefit from AI systems capable of interpreting intent from limited and noisy signals.

OpenAI plans to collaborate with Merge Labs on scientific foundation models and advanced tools, aiming to accelerate research progress and translate experimental concepts into practical applications over time.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Britain’s transport future tied to AI investment

AI is expected to play an increasingly important role in improving Britain’s road and rail networks. MPs highlighted its potential during a transport-focused industry summit in Parliament.

The Transport Select Committee chair welcomed government investment in AI and infrastructure. Road maintenance, connectivity and reduced delays were cited as priorities for economic growth.

UK industry leaders showcased AI tools that autonomously detect and repair potholes. Businesses said more intelligent systems could improve reliability while cutting costs and disruption.

Experts warned that stronger cybersecurity must accompany AI deployment. Safeguards are needed to protect critical transport infrastructure from external threats and misuse.


UAE joins US-led Pax Silica alliance

The United Arab Emirates has joined Pax Silica, a US-led alliance focused on AI and semiconductor supply chains. The move places Abu Dhabi among Washington’s trusted technology partners.

The pact aims to secure access to chips, computing power, energy and critical minerals. The US Department of State says technology supply chains are now treated as strategic assets.

UAE officials view the alliance as supporting economic diversification and AI leadership ambitions. Membership strengthens access to advanced semiconductors and large-scale data centre infrastructure.

Pax Silica reflects a broader shift in global tech diplomacy towards allied supply networks. Analysts say participation could shape future investment in AI infrastructure and manufacturing.


Microsoft disrupts global RedVDS cybercrime network

Microsoft has launched a joint legal action in the US and the UK to dismantle RedVDS, a subscription service supplying criminals with disposable virtual computers for large-scale fraud. The operation, conducted with German authorities and Europol, seized key domains and shut down the RedVDS marketplace.

RedVDS enabled sophisticated attacks, including business email compromise and real estate payment diversion schemes. Since March 2025, it has caused about $40 million in losses in the US, hitting organisations such as H2-Pharma and Gatehouse Dock Condominium Association.

Globally, over 191,000 organisations have been impacted by RedVDS-enabled fraud, often combined with AI-generated emails and multimedia impersonation.

Microsoft emphasises that targeting the infrastructure, rather than individual attackers, is key. International cooperation disrupted servers and payment networks supporting RedVDS and helped identify those responsible.

Users are advised to verify payment requests, use multifactor authentication, and report suspicious activity to reduce risk.

The civil action marks the 35th case by Microsoft’s Digital Crimes Unit, reflecting a sustained commitment to dismantling online fraud networks. As cybercrime evolves, Microsoft and partners aim to block criminals and protect people and organisations globally.


MIT tool combines AI and physics for 3D printing

MIT researchers have developed a generative AI system called MechStyle that allows users to personalise 3D-printed objects while ensuring they remain durable and functional.

The tool combines AI-driven design with physics simulations, allowing everyday items such as vases, hooks, and glasses to be customised without compromising structural integrity.

Users can upload their own 3D models or select presets and use text or image prompts to guide the design. MechStyle modifies the geometry and simulates stress points to maintain strength, enabling unique, tactile, and usable creations.

The system can personalise aesthetics while preserving functionality, even for assistive devices like finger splints and utensil grips.

To optimise performance, MechStyle employs an adaptive scheduling strategy that checks only the critical areas of a model, reducing computation time. Early tests of 30 objects, including designs resembling bricks, cacti, and stones, showed up to 100% structural viability.

The tool offers a freestyle mode for rapid experimentation and a careful mode for analysing the effects of modifications. Researchers plan to expand MechStyle to generate entirely new 3D models from scratch and improve faulty designs.

The project reflects collaboration with Google, Stability AI, and Northeastern University and was presented at the ACM Symposium on Computational Fabrication. Its potential extends to personal items, home and office décor, and even commercial prototypes for retail products.


EMA and FDA set AI principles for medicine

The European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) have released ten principles for good AI practice in the medicines lifecycle. The guidelines provide broad direction for AI use in research, clinical trials, manufacturing, and safety monitoring.

The principles are relevant to pharmaceutical developers and to marketing authorisation applicants and holders, and will form the basis for future AI guidance in different jurisdictions. EU guideline development is already underway, building on EMA’s 2024 AI reflection paper.

European Commissioner Olivér Várhelyi said the initiative demonstrates renewed EU-US cooperation and commitment to global innovation while maintaining patient safety.

AI adoption in medicine has grown rapidly in recent years. New pharmaceutical legislation and proposals, such as the European Commission’s Biotech Act, highlight AI’s potential to accelerate the development of safe and effective medicines.

A principles-based approach is seen as essential to manage risks while promoting innovation.

The EMA-FDA collaboration builds on prior bilateral work and aligns with EMA’s strategy to leverage data, digitalisation, and AI. Ethics and safety remain central, with a focus on international cooperation to enable responsible innovation in healthcare globally.


Why young people across South Asia turn to AI

Children and young adults across South Asia are increasingly turning to AI tools for emotional reassurance, schoolwork and everyday advice, even while acknowledging their shortcomings.

Easy access to smartphones, cheap data and social pressures have made chatbots a constant presence, often filling gaps left by limited human interaction.

Researchers and child safety experts warn that growing reliance on AI risks weakening critical thinking, reducing social trust and exposing young users to privacy and bias-related harms.

Studies show that many children understand AI can mislead or oversimplify, yet receive little guidance at school or home on how to question outputs or assess risks.

Rather than banning AI outright, experts argue for child-centred regulation, stronger safeguards and digital literacy that involves parents, educators and communities.

Without broader social support systems and clear accountability from technology companies, AI risks becoming a substitute for human connection instead of a tool that genuinely supports learning and wellbeing.


AI-assisted money management adoption rises

Young adults in the UK are increasingly turning to AI for help with managing their finances, as many struggle to save and maintain control over spending.

A survey of 5,000 adults aged 28 to 40 found that impulse purchases and weak self-discipline frequently undermine savings, while most feel they could improve their financial knowledge.

AI-powered financial tools are gaining traction, particularly among those aged 28 to 34. Nearly two-thirds of respondents would trust AI to advise on disposable income, and over half would allow it to manage bills or prevent overdrafts.

However, nearly a quarter prefer to start with limited use, seeking proof of value before full engagement.

Regional differences highlight the uneven financial landscape in the UK. Londoners save significantly more than the national average, while cities such as Newcastle and Cardiff lag far behind.

Experts suggest that, to be effective, fintech solutions must balance behavioural support with practical assistance and account for regional disparities.

Fintechs should prioritise tools that deliver immediate value over purely aspirational AI features. Modular tools and age- or region-specific solutions are likely to engage users, especially older millennials with rising financial responsibilities.


X restricts Grok image editing after global backlash

Elon Musk’s X has limited the image editing functions of its Grok AI tool after criticism over the creation of sexualised images of real people.

The platform said technological safeguards have been introduced to block such content in regions where it is illegal, following growing concern from governments and regulators.

UK officials described the move as a positive step, although regulatory scrutiny remains ongoing.

Authorities are examining whether X complied with existing laws, while similar investigations have been launched in the US amid broader concerns over the misuse of AI-generated imagery.

International pressure has continued to build, with some countries banning Grok entirely instead of waiting for platform-led restrictions.

Policy experts have welcomed stronger controls but questioned how effectively X can identify real individuals and enforce its updated rules across different jurisdictions.


Winnipeg schools embrace AI as classroom learning tool

At General Wolfe School and other Winnipeg classrooms, students are using AI tools to help with tasks such as translating language and understanding complex terms. Teachers guide them on how to verify AI-generated information against reliable sources.

Teachers are cautious but optimistic, developing a flexible thinking framework that prioritises critical thinking and human judgement alongside AI use, rather than imposing rigid policies as the technology evolves.

Educators in the Winnipeg School Division are adapting teaching methods to incorporate AI while discouraging over-reliance, stressing that students should use AI as an aid rather than a substitute for learning.

This reflects broader discussions in education about how to balance innovation with foundational skills as AI becomes more commonplace in school environments.
