Qalb brings Urdu-language AI to Pakistan

Pakistan has launched its own Urdu-focused generative AI model, Qalb, trained on 1.97 billion tokens and evaluated across more than seven international benchmarking frameworks. The developers say the model outperforms existing Urdu-language systems on key real-world performance indicators.

With Urdu spoken by over 230 million people worldwide, Qalb aims to expand access to advanced AI tools in Pakistan’s national language. The model is designed to support local businesses, startups, education platforms, digital services, and voice-based AI agents.

Qalb was developed by a small team led by Taimoor Hassan, a serial entrepreneur who has launched and exited multiple startups and previously won the Microsoft Cup. He completed his undergraduate studies in computer science in Pakistan and is currently pursuing postgraduate education in the United States.

‘I had the opportunity to contribute in a small way to a much bigger mission for the country,’ Hassan said, noting that the project was built with his former university teammates Jawad Ahmed and Muhammad Awais. The group plans to continue refining localised AI models for specific industries.

The launch of Qalb highlights how smaller teams can develop advanced AI tools outside major technology hubs. Supporters say Urdu-first models could help drive innovation across Pakistan’s digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI invests in Merge Labs to advance brain-computer interfaces

US AI company OpenAI has invested in Merge Labs as part of a seed funding round, signalling growing interest in brain-computer interfaces (BCIs) as a future layer of human–technology interaction.

Merge Labs describes its mission as bridging the gap between biology and AI to expand human capability and agency. The research lab is developing new BCI approaches designed to operate safely while enabling much higher communication bandwidth between the brain and digital systems.

AI is expected to play a central role in Merge Labs’ work, supporting advances in neuroscience, bioengineering and device development rather than relying on traditional interface models.

High-bandwidth brain interfaces are also expected to benefit from AI systems capable of interpreting intent from limited and noisy signals.

OpenAI plans to collaborate with Merge Labs on scientific foundation models and advanced tools, aiming to accelerate research progress and translate experimental concepts into practical applications over time.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ransomware gang Everest claims data breach at Nissan Motor Corporation

Nissan Motor Corporation has been listed on the dark web by the Everest ransomware group, which is threatening to release allegedly stolen data within days unless a ransom is paid. The group claims to have exfiltrated around 900 gigabytes of company files.

Everest published sample screenshots showing folders linked to marketing, sales, dealer orders, warranty analysis, and internal communications. Many of the files appear to relate to Nissan’s operations in Canada, although some dealer records reference the United States.

Nissan has not issued a public statement about the alleged breach. The company has been contacted for comment, but no confirmation has been provided regarding the nature or scale of the incident.

Everest began as a ransomware operation in 2020 but is now believed to focus on gaining and selling network access using stolen credentials, insider recruitment, and remote access tools. The group is thought to be Russian-speaking and continues to recruit affiliates through its leak site.

The Nissan listing follows recent claims by Everest involving Chrysler and ASUS. In those cases, the group said it had stolen large volumes of personal and corporate data, with ASUS later confirming a supplier breach involving camera source code.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Britain’s transport future tied to AI investment

AI is expected to play an increasingly important role in improving Britain’s road and rail networks. MPs highlighted its potential during a transport-focused industry summit in Parliament.

The Transport Select Committee chair welcomed government investment in AI and infrastructure. Road maintenance, connectivity and reduced delays were cited as priorities for economic growth.

UK industry leaders showcased AI tools that autonomously detect and repair potholes. Businesses said more intelligent systems could improve reliability while cutting costs and disruption.

Experts warned that stronger cybersecurity must accompany AI deployment. Safeguards are needed to protect critical transport infrastructure from external threats and misuse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE joins US-led Pax Silica alliance

The United Arab Emirates has joined Pax Silica, a US-led alliance focused on AI and semiconductor supply chains. The move places Abu Dhabi among Washington’s trusted technology partners.

The pact aims to secure access to chips, computing power, energy and critical minerals. The US Department of State says technology supply chains are now treated as strategic assets.

UAE officials view the alliance as supporting economic diversification and AI leadership ambitions. Membership strengthens access to advanced semiconductors and large-scale data centre infrastructure.

Pax Silica reflects a broader shift in global tech diplomacy towards allied supply networks. Analysts say participation could shape future investment in AI infrastructure and manufacturing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Belgian hospital AZ Monica hit by cyberattack

A cyberattack hit AZ Monica hospital in Belgium, forcing the shutdown of all servers, cancellation of scheduled procedures, and transfer of critical patients. The hospital network, with campuses in Antwerp and Deurne, provides acute, outpatient, and specialised care to the local population.

The attack was detected at 6:32 a.m., prompting staff to disconnect systems proactively. While urgent care continues, non-urgent consultations and surgeries have been postponed due to restricted access to the digital medical record.

Seven critical patients were safely transferred with Red Cross support.

Hospital officials have launched an investigation and notified the police and prosecutors. Details of the attack remain unclear, and unverified reports of a ransom demand have not been confirmed.

The hospital emphasised that patient safety and continuity of care are top priorities.

Cyberattacks on hospitals can severely disrupt medical services, delay urgent treatments, and put patients’ lives at risk, highlighting the growing vulnerability of healthcare systems to digital threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft disrupts global RedVDS cybercrime network

Microsoft has launched a joint legal action in the US and the UK to dismantle RedVDS, a subscription service supplying criminals with disposable virtual computers for large-scale fraud. The operation, carried out with German authorities and Europol, seized key domains and shut down the RedVDS marketplace.

RedVDS enabled sophisticated attacks, including business email compromise and real estate payment diversion schemes. Since March 2025, it has caused about $40 million in losses in the US, affecting organisations such as H2-Pharma and Gatehouse Dock Condominium Association.

Globally, over 191,000 organisations have been impacted by RedVDS-enabled fraud, often combined with AI-generated emails and multimedia impersonation.

Microsoft emphasises that targeting the infrastructure, rather than individual attackers, is key. International cooperation disrupted servers and payment networks supporting RedVDS and helped identify those responsible.

Users are advised to verify payment requests, use multifactor authentication, and report suspicious activity to reduce risk.

The civil action marks the 35th case by Microsoft’s Digital Crimes Unit, reflecting a sustained commitment to dismantling online fraud networks. As cybercrime evolves, Microsoft and partners aim to block criminals and protect people and organisations globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MIT tool combines AI and physics for 3D printing

MIT researchers have developed a generative AI system called MechStyle that allows users to personalise 3D-printed objects while ensuring they remain durable and functional.

The tool combines AI-driven design with physics simulations, allowing everyday items such as vases, hooks, and glasses to be customised without compromising structural integrity.

Users can upload their own 3D models or select presets and use text or image prompts to guide the design. MechStyle modifies the geometry and simulates stress points to maintain strength, enabling unique, tactile, and usable creations.

The system can personalise aesthetics while preserving functionality, even for assistive devices like finger splints and utensil grips.

To optimise performance, MechStyle employs an adaptive scheduling strategy that checks only the critical areas of a model, reducing computation time. Early tests of 30 objects, including designs resembling bricks, cacti, and stones, showed up to 100% structural viability.

The tool offers a freestyle mode for rapid experimentation and a careful mode for analysing the effects of modifications. Researchers plan to expand MechStyle to generate entirely new 3D models from scratch and improve faulty designs.

The project reflects collaboration with Google, Stability AI, and Northeastern University and was presented at the ACM Symposium on Computational Fabrication. Its potential extends to personal items, home and office décor, and even commercial prototypes for retail products.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EMA and FDA set AI principles for medicine

The European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) have released ten principles for good AI practice in the medicines lifecycle. The guidelines provide broad direction for AI use in research, clinical trials, manufacturing, and safety monitoring.

The principles are relevant to pharmaceutical developers, marketing authorisation applicants and holders, and will form the basis for future AI guidance in different jurisdictions. EU guideline development is already underway, building on EMA’s 2024 AI reflection paper.

European Commissioner Olivér Várhelyi said the initiative demonstrates renewed EU-US cooperation and commitment to global innovation while maintaining patient safety.

AI adoption in medicine has grown rapidly in recent years. New pharmaceutical legislation and proposals, such as the European Commission’s Biotech Act, highlight AI’s potential to accelerate the development of safe and effective medicines.

A principles-based approach is seen as essential to manage risks while promoting innovation.

The EMA-FDA collaboration builds on prior bilateral work and aligns with EMA’s strategy to leverage data, digitalisation, and AI. Ethics and safety remain central, with a focus on international cooperation to enable responsible innovation in healthcare globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Why young people across South Asia turn to AI

Children and young adults across South Asia are increasingly turning to AI tools for emotional reassurance, schoolwork and everyday advice, even while acknowledging their shortcomings.

Easy access to smartphones, cheap data and social pressures have made chatbots a constant presence, often filling gaps left by limited human interaction.

Researchers and child safety experts warn that growing reliance on AI risks weakening critical thinking, reducing social trust and exposing young users to privacy and bias-related harms.

Studies show that many children understand AI can mislead or oversimplify, yet receive little guidance at school or home on how to question outputs or assess risks.

Rather than banning AI outright, experts argue for child-centred regulation, stronger safeguards and digital literacy that involves parents, educators and communities.

Without broader social support systems and clear accountability from technology companies, AI risks becoming a substitute for human connection instead of a tool that genuinely supports learning and wellbeing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!