Pakistan’s digital transformation highlighted as UNESCO advances AI ethics

UNESCO used the Pakistan Governance Forum 2026 to highlight the need for a structured Ethical AI and Data Governance Framework as the country accelerates its digital transformation.

Federal leaders, provincial authorities and civil society convened to examine governance reforms, with UNESCO urging Pakistan to align its expanding digital public infrastructure with coherent standards that protect rights while enabling innovation.

Speaking at the Forum, Fuad Pashayev underlined that Pakistan’s reform priority should centre on UNESCO’s Recommendation on the Ethics of Artificial Intelligence, adopted unanimously by all 193 Member States in 2021.

Anchoring national systems in transparency, accountability and meaningful human oversight was framed as essential for maintaining public trust as digital services reshape access to benefits and interactions between citizens and the state.

To support the shift, UNESCO promoted its AI Readiness Assessment Methodology (RAM), which is already deployed in more than 50 countries. The tool helps governments identify regulatory gaps, strengthen institutional coordination and design safeguards against discrimination and algorithmic bias.

UNESCO has already contributed to Pakistan’s draft National AI Policy, ensuring alignment with international ethical frameworks while accommodating national development needs.

Capacity building formed a major pillar of UNESCO’s engagement. In partnership with the University of Oxford, the organisation launched a global course on AI and Digital Transformation in Government in 2025, attracting more than 19,000 enrolments worldwide.

Pakistan leads participation globally, reflecting both the country’s momentum and growing demand for structured training.

UNESCO’s ongoing work aims to reinforce data governance, improve AI readiness and embed ethical safeguards across Pakistan’s digital transformation strategy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Financial crime risks are reshaped by the rise of autonomous AI agents

Autonomous AI agents are transforming finance by executing transactions independently and speeding up workflows in digital assets and programmable finance. Software can manage wallets and move funds across blockchains in seconds, narrowing detection windows.

AI agents don’t create new crimes but increase speed and complexity, making accountability essential. Responsibility rests with developers, operators, and beneficiaries, with investigators tracing control, configuration, and economic benefit to determine liability.

Weak oversight or misconfigured rules can lead to significant compliance and enforcement consequences.

Investigations face new challenges as autonomous agents operate across multiple blockchains, decentralised exchanges, and global jurisdictions.

Real-time analytics and automated tracing are essential to link transactions to accountable actors before funds move. Governance architecture and monitoring systems increasingly serve as evidence in regulatory or criminal actions.

Institutions and law enforcement are using AI monitoring, anomaly detection, and automated containment systems. Autonomous AI impacts sanctions and national security, emphasising the need for human oversight alongside automation.

AI becomes central to biotech discovery and drug development

The biotechnology industry is moving from early AI experimentation to fully integrated discovery systems that embed AI into everyday research operations.

According to the 2026 Biotech AI Report from Benchling, leading organisations are reshaping data environments and R&D structures, making AI a core part of the drug development process.

Predictive models, such as protein structure prediction and docking simulations, are accelerating early-stage discovery, helping scientists identify targets faster and improve accuracy.

Challenges persist in generative design, biomarker analysis, and ADME (absorption, distribution, metabolism and excretion) prediction, where adoption lags due to fragmented or poor-quality data.

Organisations overcoming these hurdles invest in high-quality, well-annotated measurements and strong integration between wet- and dry-lab work, creating a continuous learning cycle that drives faster insights and reduces experimental dead ends.

Talent strategies are evolving to place AI expertise directly in R&D teams. Many firms upskill existing scientific staff to act as ‘scientific translators,’ bridging biology, regulatory needs, and machine learning.

Embedding AI leadership within research teams or using hybrid models reduces handoffs and ensures AI tools remain practical in real-world experiments.

Biotech firms combine in-house development with commercial components, following a ‘build what differentiates, buy what scales’ strategy. Confidence in AI is rising, driving investment in infrastructure, modelling, and integrated AI workflows for research.

AI use among students surges as chatbots reshape schoolwork

More than half of US teenagers use AI tools to help with schoolwork, according to a new Pew Research Center study. The survey found that 54% of students aged 13 to 17 have used chatbots such as OpenAI’s ChatGPT or Microsoft’s Copilot to research assignments or solve maths problems.

Usage has risen in recent years. In 2024, 26% of US teens reported using ChatGPT for schoolwork, up from 13% in 2023. The latest survey of 1,458 teens and parents found 44% use AI for some schoolwork, while 10% rely on chatbots for most tasks.

Researchers say AI assistance is becoming routine in classrooms. Colleen McClain, a senior researcher at Pew and co-author of the report, said chatbot use for schoolwork is now a common practice among teens.

The findings come amid an intensifying debate over generative AI in education. Supporters argue that schools should teach students to use and evaluate AI tools, while critics warn of misinformation, reduced critical thinking, and increased cheating.

Recent research has raised questions about learning outcomes. One study by Cambridge University Press & Assessment and Microsoft Research found that students who took notes without chatbot support showed stronger reading comprehension than those using AI assistance.

ChatGPT Health under fire after study finds major failures in emergency detection

A new evaluation of ChatGPT Health has raised major safety concerns after researchers found it frequently failed to recognise urgent medical emergencies.

The independent study, published in Nature Medicine, reported that the system under-triaged more than half of the clinical scenarios tested, giving advice that could have delayed life-saving treatment.

The research team, led by Ashwin Ramaswamy, created sixty patient simulations ranging from minor illnesses to life-threatening conditions.

Three doctors agreed on the appropriate urgency for each case before comparing their judgement with the model’s responses. The AI performed adequately in straightforward emergencies such as strokes, yet frequently minimised danger in more complex presentations, including severe asthma and diabetic crises.

Experts also warned that ChatGPT Health struggled to detect suicidal ideation reliably. Minor changes to scenario details, such as adding normal lab results, caused safeguards to disappear entirely.

Critics, including health-misinformation researcher Alex Ruani, described the behaviour as dangerously inconsistent and capable of creating a false sense of security.

OpenAI said the study did not reflect typical real-world use but acknowledged the need for continued research and improvement.

Policy specialists argue that the findings underline the need for clear safety standards, external audits and stronger transparency requirements for AI systems operating in sensitive medical contexts.

Galaxy S26 series brings powerful AI and privacy features

Samsung Electronics has unveiled the Galaxy S26 series, featuring advanced AI experiences, powerful performance, and an industry-leading camera system designed to simplify everyday smartphone tasks.

The series, which includes the Galaxy S26, S26+, and S26 Ultra, handles complex processes in the background, allowing users to focus on results rather than device operations.

The Galaxy S26 Ultra introduces the world’s first built-in Privacy Display, a redesigned chipset, and improved thermal management. Together, these upgrades enhance AI performance, graphics, and CPU efficiency, while ensuring faster, cooler, and more reliable operation throughout the day.

Photography and videography are also upgraded with wider apertures, Nightography Video, Super Steady video, and AI-powered editing tools that make professional-quality content accessible to all users.

Galaxy AI streamlines daily experiences by proactively suggesting actions, organising information, and automating tasks. Features such as Now Nudge, Now Brief, Circle to Search, and upgraded Bixby allow users to interact naturally with their devices.

Integrated AI agents, including Gemini and Perplexity, support multi-step tasks across apps, from booking services to advanced searches, all with minimal input.

Samsung has embedded multiple layers of security and privacy in the Galaxy S26 series. From AI-powered Call Screening and Privacy Alerts to Knox Vault, Knox Matrix, and post-quantum cryptography, users can control data access and protect personal information.

With long-term security updates, seamless software, and Galaxy Buds4 integration, the S26 series aims to combine performance, convenience, and safety in a single, intuitive device.

Scotland considers new offence for AI intimate images

The Scottish government has launched a consultation proposing a specific criminal offence for creating AI-generated intimate images without consent. Existing Scots law covers the sharing of such images, but ministers say gaps remain around their creation.

The consultation also seeks views on criminalising digital tools designed solely to produce intimate images and videos, as ministers aim to address harms linked to emerging AI technologies affecting women and girls.

Additional proposals include a statutory aggravation where domestic abuse involves a pregnant woman, requiring courts to treat such cases more seriously at sentencing. Measures to strengthen protections against spiking offences are also under review.

Justice Secretary Angela Constance said responses would inform future action to reduce violence against women and girls. The consultation also considers changes to non-harassment orders and examines whether further laws on non-fatal strangulation are needed.

Uni.lu expert urges schools to embrace AI

AI should be integrated into classrooms rather than avoided, according to Gilbert Busana of the University of Luxembourg. Speaking to RTL Today, he said ignoring AI would be a disservice to pupils and teachers alike.

Busana argued that AI should be taught both as a standalone subject and across disciplines in Luxembourg’s schools. Clear guidelines are needed to define when and how pupils may use AI, alongside transparency about its role in assignments.

He stressed that developing AI literacy is essential to protecting critical thinking. Assessment methods may shift away from focusing solely on final outputs towards evaluating the learning process itself.

Teachers are increasingly becoming coaches rather than mere transmitters of knowledge. Busana said continuous professional training and collaboration within schools will be vital as AI reshapes education.

AI misuse in online scams involving OpenAI models

OpenAI has reported new instances of its models being exploited in online scams and coordinated information campaigns. The company detailed actions to remove offending accounts and strengthen safeguards, highlighting misuse in fraud and deceptive content creation.

Several cases involved romance and ‘task’ scams, in which AI-generated messages built emotional engagement before requesting payment. One network, dubbed ‘Operation Date Bait,’ used chatbots to promote a fictitious dating service targeting young men in Indonesia.

Another, ‘Operation False Witness,’ saw actors posing as legal professionals to solicit advance fees for non-existent recovery services.

The report also outlined coordinated campaigns leveraging AI to produce articles, social media posts, and comments on geopolitical topics. In ‘Operation Trolling Stone,’ AI-generated content on a Russian arrest in Argentina was shared widely in multiple languages to mimic grassroots engagement.

OpenAI stressed that while AI tools were sometimes used in these campaigns, engagement was driven largely by the reach and size of the accounts involved rather than by the AI-generated content itself.

The company continues to monitor misuse and collaborates with partners and authorities to curb fraudulent or deceptive activity. Its systems have been updated to decline policy-violating requests, and it noted that not all suspicious content online was generated using its tools.

Binance targets Greece as EU gateway

Efforts to secure a foothold in Europe have led Binance to select Greece as its entry point for operating under the EU’s Markets in Crypto-Assets framework. A licence would let the exchange offer services across the European Union when the rules take effect in July 2026.

Strategic considerations outweigh speed in the decision. Co-chief executive Richard Teng cited workforce quality, safety, and long-term growth potential as decisive factors, even though several larger EU economies have already issued more licences.

Regulatory attention continues to shape the company’s trajectory. Founder Changpeng Zhao remains a shareholder, while leadership says ongoing reforms aim to make the platform one of the most regulated exchanges globally.

Expansion plans unfold amid turbulent market conditions. Bitcoin’s price remains well below last year’s highs, dampening retail sentiment, yet institutional participation has remained resilient, supporting liquidity through the volatility.
