New India partnership targets AI innovation and digital transformation

Broadcast Engineering Consultants India Limited (BECIL) and the Centre for Development of Advanced Computing (C-DAC) have signed a Memorandum of Understanding to collaborate on advanced technologies and digital transformation. The agreement focuses on joint projects, consultancy, and technical support across sectors.

The partnership covers AI, machine learning, Internet of Things, cybersecurity, 5G, and cloud computing. It also includes the development of turnkey solutions, technology transfer, and the commercialisation of innovative products.

Capacity development is a key component of the collaboration, with both organisations supporting workforce upskilling and training to strengthen technical capabilities.

Officials stated that the partnership aims to leverage complementary strengths to deliver technology solutions. It is also expected to support innovation and contribute to India’s broader digital development objectives.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Health queries dominate AI chatbot use, study finds

A large-scale study analysing more than 500,000 health-related conversations with Microsoft Copilot offers a detailed look at how people are using general-purpose AI chatbots for medical information, symptom questions, and healthcare navigation.

Published in Nature Health, the study suggests that conversational AI is increasingly being used as an early point of contact for health concerns outside formal clinical settings.

The largest share of conversations fell into the health information and education category, accounting for 40.7% of the sample. Users frequently asked about symptoms, conditions, nutrition, treatments, and medicines, often in ways that reflected personal concerns rather than detached information-seeking.

The study found that 18.8% of conversations involved users discussing their own health conditions, while roughly one in seven personal health queries concerned someone else, such as a child, partner, or parent.
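As a rough illustration of how shares like these are tallied, the minimal Python sketch below counts category labels over a handful of hypothetical classified conversations; the categories, fields, and data are invented stand-ins, not the study’s actual taxonomy or pipeline.

```python
from collections import Counter

# Hypothetical stand-in for the study's classified conversations: each
# record carries an assigned topic category and whose health it concerned.
conversations = [
    {"category": "health information and education", "subject": "self"},
    {"category": "health information and education", "subject": "other"},
    {"category": "own health condition", "subject": "self"},
    {"category": "healthcare navigation", "subject": "self"},
]

total = len(conversations)

# Share of each category in the sample (the study reports 40.7% for
# health information and education, 18.8% for own health conditions).
for category, n in Counter(c["category"] for c in conversations).most_common():
    print(f"{category}: {n / total:.1%}")

# Share of conversations that concern someone else, such as a child,
# partner, or parent (the study finds roughly one in seven personal queries).
on_behalf = sum(c["subject"] == "other" for c in conversations)
print(f"asked on behalf of someone else: {on_behalf / total:.1%}")
```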

Patterns of use also varied by device and time of day. Mobile users were more likely to ask personal and emotionally sensitive questions, particularly about symptoms and well-being, with activity rising in the evening and overnight.

Desktop use, by contrast, was more closely associated with work, study, and administrative tasks, including research, documentation, and medical paperwork during office hours.

The study also points to growing use of AI for practical healthcare navigation. Beyond questions about symptoms or conditions, users turned to Copilot for help with appointments, provider access, paperwork, and understanding parts of the healthcare system that can be difficult to navigate. That suggests people are not using chatbots only for medical curiosity, but also to manage the bureaucratic and logistical side of care.

The broader significance of the findings lies in what they reveal about the changing role of conversational AI in everyday health decision-making. General-purpose chatbots are not replacing clinicians, but they are increasingly occupying the space before, between, and around formal care, where people seek quick explanations, reassurance, and guidance.

That makes questions of accuracy, safety, and health literacy more important, especially when users may act on AI-generated responses without professional context or oversight.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

WHO launches AI Community of Practice for emergency response surveillance

The World Health Organization Regional Office for the Eastern Mediterranean has launched a Community of Practice on AI for disaster and emergency response surveillance through the WHO Collaboratory platform.

According to the organisation, the initiative brings together national authorities, practitioners, researchers, partners, and WHO staff to share knowledge, build capacity, and develop practical guidance on the use of AI in surveillance, early warning, risk assessment, and operational response.

WHO says the Community of Practice is part of its AI Literacy Programme and is intended to strengthen national and regional capacity to evaluate, adopt, govern, and scale AI tools during disasters and health emergencies. Members will have access to training modules, peer-to-peer learning, technical working groups, and a repository of best practices and tested guidance.

The organisation states that the platform prioritises the ethical, equitable, and transparent use of AI in line with its standards. Dr Annette Heinzelmann, WHO Regional Emergency Director, a.i., said:

At WHO, we advocate for the science-driven use of artificial intelligence in public health response, especially during emergencies.

Heinzelmann added:

Our priority is to ensure these technologies are applied in ways that are safe, ethical and grounded in public health needs. This initiative reflects our commitment to supporting Member States in translating innovation into faster, more effective emergency response.

WHO says it launched the All-Hazards Information Management Toolkit last year as an AI-powered tool to support emergency information management, including rapid risk assessments, response plans, monitoring tools, and situation reports. According to WHO, participants from 20 countries were trained in the use of the toolkit and in AI literacy for emergency preparedness and surveillance.

Dr Oliver Morgan, Head of the WHO Hub for Pandemic and Epidemic Intelligence, said: ‘Artificial intelligence has enormous potential in public health, but its impact depends on how responsibly and effectively it is applied.’

Morgan expanded: ‘At the WHO Hub in Berlin, we develop innovative tools and bring experts together through initiatives like the Collaboratory to support countries and regions to detect health threats faster and respond more effectively. This Community of Practice helps ensure AI solutions move beyond pilots and into real-world emergency response, where speed, trust and usability matter most.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU proposes new Google search-data sharing measures under DMA

The European Commission has set out proposed measures that would require Google to share key search data with third-party providers under the Digital Markets Act (DMA), in a fresh step to open Europe’s online search market to greater competition. The move comes in the form of preliminary findings sent to Google, rather than a final decision, and is now subject to public consultation.

Under the proposal, Google would have to provide access to anonymised search data, including ranking, query, click, and view data, on fair, reasonable, and non-discriminatory (FRAND) terms. According to the Commission, the aim is to allow third-party search engines to improve their services and better challenge Google Search’s market position.

The proposed measures go beyond a general obligation to share data. They set out detailed conditions covering who should qualify for access, what data must be made available, how frequently it should be shared, how personal data should be anonymised, how pricing should be set, and how access procedures should work in practice. The consultation also explicitly includes companies offering online search services that incorporate AI chatbot functionality, showing that the case could shape competition not only in traditional search but also in AI-assisted search services.
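To make the scope of such sharing concrete, the sketch below shows one purely hypothetical shape an anonymised, query-level record covering ranking, query, click, and view data might take; the field names are illustrative assumptions, not the Commission’s or Google’s actual specification.

```python
from dataclasses import dataclass

# Purely illustrative sketch of an anonymised, query-level search record
# of the kind Article 6(11) DMA contemplates: ranking, query, click, and
# view data with personal identifiers stripped. All field names are
# hypothetical, not an actual data specification.
@dataclass
class AnonymisedSearchRecord:
    query_text: str        # the query string, scrubbed of personal data
    query_frequency: int   # how often the query occurred in the period
    result_url: str        # a result surfaced for the query
    rank_position: int     # where that result was ranked
    view_count: int        # how often the result was shown
    click_count: int       # how often users clicked it
    country: str           # coarse geography only, to limit re-identification

record = AnonymisedSearchRecord(
    query_text="best electric bikes",
    query_frequency=12400,
    result_url="https://example.com/reviews",
    rank_position=3,
    view_count=11900,
    click_count=2100,
    country="DE",
)
print(record)
```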

The consultation is tied to Article 6(11) of the DMA, which requires gatekeepers operating online search engines to share certain anonymised data with other search engines under FRAND terms. The Commission says it opened proceedings against Alphabet in January 2026 to specify how Google should comply with that obligation in practice.

Brussels is now asking stakeholders to comment on whether the proposed framework would work in practice, whether the anonymised data would remain useful enough to help rivals improve their services, whether additional measures are needed, and whether the implementation timeline is realistic. The consultation opened on 16 April 2026 and will run until 1 May 2026, with the Commission expecting to adopt a final decision by 27 July 2026.

The case is significant because it shows the DMA moving from broad obligations to detailed implementation. Rather than debating only whether large platforms should share data, the Commission is now trying to define what meaningful access would look like in operational terms, including what must be handed over, on what conditions, and with what privacy safeguards. In that sense, the Google case may become an important test of how far the DMA can reshape competition in digital search markets and related AI services.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Paraguay introduces AI rules for courts with UNESCO support and human oversight focus

UNESCO has supported Paraguay in developing a regulatory framework governing the use of AI within its judicial system.

The policy, adopted by the Supreme Court of Justice of Paraguay, establishes clear limits on AI use, ensuring that such systems function strictly as support tools rather than replacing human decision-making.

The regulation outlines principles for the application of AI in data processing, information management and assisted decision-making. It emphasises transparency, accountability and respect for fundamental rights, requiring disclosure when AI tools influence judicial processes.

The framework aligns with UNESCO’s global guidelines on AI in courts, which promote human oversight, auditability and the protection of rights throughout the lifecycle of AI systems.

Implementation has been supported through technical cooperation, including training programmes to strengthen institutional capacity.

Such an approach in Paraguay reflects a broader trend towards embedding ethical safeguards in AI governance within public institutions. It highlights the role of international cooperation in shaping regulatory models that balance innovation with legal certainty and public trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Canada’s cyber resilience plan targets AI-driven threats to critical infrastructure

The Canadian Centre for Cyber Security has launched a new initiative to strengthen national resilience against escalating cyber threats targeting critical infrastructure.

The programme, titled CIREN (Critical Infrastructure Resilience and Escalated Threat Navigation), aims to prepare organisations for severe disruptions by improving readiness, response capacity, and long-term recovery planning.

The initiative reflects growing concern within Communications Security Establishment Canada over increasingly sophisticated cyber risks, including those amplified by AI.

Authorities highlight that both state-sponsored and criminal actors are exploiting automation and AI to accelerate attacks, raising the stakes for sectors such as energy, telecommunications, transport, and water systems.

CIREN outlines a structured approach centred on operational continuity during extreme scenarios.

Organisations are encouraged to prepare for prolonged isolation of critical systems, develop independent operating capabilities, and establish recovery frameworks capable of rebuilding infrastructure after major incidents. The focus remains on maintaining essential services under worst-case conditions.

The programme forms part of a broader national strategy in Canada to enhance cyber readiness through collaboration, threat intelligence, and practical guidance.

Officials stress that proactive planning and simplified defensive measures can significantly reduce real-world impact, particularly as cyber incidents grow in frequency, scale, and complexity.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ILO warns against treating AI exposure indicators as job-loss forecasts

A new brief from the International Labour Organisation argues that AI exposure indicators should not be treated as forecasts of job losses, even as they become a more common tool for assessing how artificial intelligence could reshape work.

According to the ILO, these indicators can help identify where jobs may be affected by AI, but they do not show whether workers will actually be displaced or how labour markets will adjust in practice.

The brief examines how different exposure measures are constructed and why they often produce different results. Earlier approaches to automation focused mainly on routine and lower-skilled work, while newer AI-related models point to greater exposure in higher-skilled cognitive occupations, including roles in finance, computing, business, and education. That shift reflects the growing capacity of AI systems to perform tasks once seen as less vulnerable to automation.

The ILO stresses that exposure does not necessarily lead to job loss. Most indicators rely on static task descriptions and estimate what may be technically feasible, rather than what employers will actually adopt or what makes economic sense. They do not capture whether automation is profitable, whether it improves productivity, or how firms, workers, and institutions may respond over time.

The brief also argues that AI-related disruption is unlikely to stay confined to a narrow set of occupations. Jobs are linked through shared skills, career mobility, and workplace structures, meaning that changes in one part of the labour market can influence broader employment patterns elsewhere. That makes simple occupation-by-occupation risk scores less useful on their own than they may appear.

For that reason, the ILO says exposure indicators should be used as early warning signals rather than stand-alone labour market forecasts. It recommends combining them with evidence on employment, wages, job transitions, and broader economic and institutional conditions to build a more realistic picture of how AI is affecting work.
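A minimal sketch of that recommendation, under invented data: treat the exposure score as one signal and flag an occupation only when labour market evidence points the same way. The fields, figures, and thresholds below are hypothetical, not the ILO’s methodology.

```python
# Illustrative sketch of using exposure indicators as early warning
# signals rather than forecasts: flag an occupation for closer monitoring
# only when high exposure coincides with weakening employment and wage
# trends. All data, fields, and thresholds here are hypothetical.
occupations = [
    {"name": "accounting clerks", "ai_exposure": 0.82,
     "employment_trend": -0.03, "wage_trend": -0.01},
    {"name": "software developers", "ai_exposure": 0.74,
     "employment_trend": 0.05, "wage_trend": 0.04},
    {"name": "care workers", "ai_exposure": 0.21,
     "employment_trend": 0.02, "wage_trend": 0.01},
]

def watchlist(occs, exposure_cutoff=0.7):
    # High exposure alone is not a job-loss forecast; require corroborating
    # labour market evidence before flagging an occupation.
    return [
        o["name"] for o in occs
        if o["ai_exposure"] >= exposure_cutoff
        and o["employment_trend"] < 0
        and o["wage_trend"] < 0
    ]

print(watchlist(occupations))  # -> ['accounting clerks']
```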

The broader significance of the brief is that it pushes back against the simplest narratives about AI and employment. Rather than asking how many jobs AI will eliminate, the ILO is urging policymakers to focus on where work may change, how quickly adoption may happen, and what kinds of institutions, skills, and labour protections will shape the outcome.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

AI needs digital public infrastructure to work for citizens, World Economic Forum says

The World Economic Forum says AI will only improve public services at scale if governments build on strong digital public infrastructure rather than fragmented systems and isolated pilot projects.

In a new analysis, the WEF points to digital identity, payments, and data exchange as the core layers that already support service delivery in many countries.

It argues that AI can make those systems more responsive by speeding up tasks such as identity verification, record retrieval, and payment processing.

But the Forum also warns that combining AI with digital public infrastructure will not work without clear safeguards. Interoperability, trust, and consent-based data use are presented as essential to making AI systems effective across public institutions while protecting users.
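As a toy illustration of the consent principle, the sketch below gates a service’s access to a citizen record on an explicit consent check; every name and structure is invented for illustration and does not reference any actual digital public infrastructure stack.

```python
# Toy illustration of consent-based data use in a digital public
# infrastructure setting: an AI-assisted service may read a citizen's
# record only if consent exists for that specific purpose. All names
# and structures here are invented for illustration.
consent_registry = {
    ("citizen-123", "identity_verification"): True,
    ("citizen-123", "payment_processing"): False,
}

records = {"citizen-123": {"name": "A. Citizen", "verified": True}}

def fetch_record(citizen_id, purpose):
    # The consent layer, not the AI service, decides whether data flows.
    if consent_registry.get((citizen_id, purpose), False):
        return records.get(citizen_id)
    return None

print(fetch_record("citizen-123", "identity_verification"))  # record returned
print(fetch_record("citizen-123", "payment_processing"))     # None: no consent
```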

The wider message is that AI in government is no longer just a question of adoption. For countries hoping to scale public-sector AI, the bigger challenge is whether the underlying digital infrastructure is strong enough to support it.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Study suggests AI reliance may weaken short-term problem-solving

A recent study by researchers from Carnegie Mellon University, the University of Oxford, MIT, and UCLA suggests that reliance on AI for basic tasks may temporarily weaken cognitive performance.

Participants who used AI tools to complete simple maths and reading exercises initially performed better than those working without assistance. However, once the technology was removed, their accuracy declined, and they were less likely to persist with the tasks.

The findings suggest that even brief exposure to AI support can reduce a person’s willingness to engage in sustained problem-solving, which remains essential to learning and skill development.

Researchers found that participants became more likely to abandon tasks and less able to complete them independently after relying on AI assistance.

The results add to wider concerns about how AI may be reshaping learning habits and intellectual development. Related research from MIT has described a phenomenon called ‘cognitive debt’, in which heavy reliance on AI tools may weaken retention, understanding, and independent reasoning over time.

Taken together, the studies point to a growing tension in AI design. While such tools can improve speed and convenience, they may also reduce the mental effort needed to build lasting cognitive skills. That suggests AI systems may need to be designed to support learning without replacing independent thought altogether.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK Defence Innovation opens Biosecurity Frontiers competition with up to £2 million

UK Defence Innovation has opened the Biosecurity Frontiers themed competition, run by the Cabinet Office on behalf of the UK government, and is seeking innovative proposals to help deliver the ambitions of the 2023 UK Biological Security Strategy and the 2025 National Security Strategy.

The competition document states that proposals may be used by multiple government departments, sectors, and frontline users, including the police, the military, and NHS/public health bodies.

Up to £2 million excluding VAT is available, with the government expecting to fund five to seven proposals across three challenge areas: biodetection and biosurveillance; AI and diagnostics, therapeutics, and vaccines; and non-pharmaceutical protective systems.

Individual awards are expected to be in the region of £100,000 to £500,000, though the document states proposals at higher or lower values may also be funded.

The submission deadline is 12:00 midday BST on 10 June 2026. Projects are expected to start in September 2026 and run for no longer than 12 months. Proposals must progress through at least one Technology Readiness Level (TRL). For Challenges 1 and 3, projects must reach TRL 4-6, while Challenge 2 projects may reach TRL 7.

For biodetection and biosurveillance, the competition seeks capabilities to detect and monitor traditional and novel biological threats, including portable surveillance technologies, computational tools for analysing complex datasets, and permanently installed air surveillance systems in high-footfall locations.

For AI and diagnostics, therapeutics, and vaccines, the document refers to AI-based support for identifying and developing new diagnostic, therapeutic, and vaccine candidates, including structure-based discovery and development tools.

For non-pharmaceutical protective systems, the competition covers lower-cost personal protective equipment, respiratory protective equipment with improved fit, decontamination and disinfection approaches, biodegradable PPE materials, and solutions that remove humans from operations in contaminated areas. The competition document says it is funded by the Integrated Security Fund, which supports priority national security themes in the UK 2025 National Security Strategy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!