UK uses AI to fight drug-resistant infections

The UK is harnessing AI to combat the growing threat of drug-resistant infections, a crisis often called ‘the silent pandemic’. The Fleming Initiative and GSK will invest £45m in AI research to speed up new antibiotics and combat deadly bacteria and fungi.

The project targets Gram-negative bacteria, such as E. coli and Klebsiella, which resist treatment due to their protective outer layers. Researchers will test different molecules and use AI to identify which can penetrate and persist in these bacteria.

The goal is to shorten years of laboratory work into rapid computational predictions that guide the design of effective antibiotics.

AI will predict how resistant infections emerge and spread, helping scientists anticipate threats early. The initiative will also target deadly fungal infections, such as Aspergillus, which threaten people with weakened immune systems.

Experts hope the approach can outpace bacterial evolution and reduce the human toll from untreatable infections. Fleming Initiative director Alison Holmes emphasised the vital role of antibiotics in modern medicine and warned that overuse has squandered this critical resource.

Tony Wood, GSK’s chief scientific officer, said the project will open new avenues for discovering antibiotics while anticipating resistance, transforming the treatment and prevention of serious infections worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU aviation regulator opens debate on AI oversight and safety

EASA has issued its first regulatory proposal on AI in aviation, opening a three-month consultation for industry feedback. The draft focuses on trustworthy, data-driven AI systems and anticipates applications ranging from basic assistance to human–AI teaming.

The move comes amid wider criticism of EU AI rules from major tech firms and political leaders. Aviation stakeholders are now assessing whether compliance costs and operational demands could slow development or disrupt competitive positioning across the sector.

Experts warn that adapting to the framework may require significant investment, particularly for companies with limited resources. Others may accelerate AI adoption to preserve market advantage, especially where safety gains or efficiency improvements justify rapid deployment.

EASA stresses that consultation is essential to balance strict assurance requirements with the flexibility needed for innovation. Privacy and personal data issues remain contentious, shaping expectations for acceptable AI use in safety-critical environments.

Meanwhile, Airbus is pushing to reach 75 A320-family deliveries per month by 2027, driven by the A321neo’s strong order book. In parallel, Mitsui OSK Lines continues to lead the global LNG carrier market, reflecting broader momentum across adjacent transport sectors.

Report calls for new regulations as AI deepfakes threaten legal evidence

US courtrooms increasingly depend on video evidence, yet researchers warn that the legal system is unprepared for an era in which AI can fabricate convincing scenes.

A new report led by the University of Colorado Boulder argues that national standards are urgently needed to guide how courts assess footage generated or enhanced by emerging technologies.

The authors note that judges and jurors receive little training in evaluating altered clips, even though more than 80 percent of cases involve some form of video.

Concerns have grown as deepfakes become easier to produce. A civil case in California collapsed in September after a judge ruled that a witness video was fabricated, and researchers believe such incidents will rise as tools like Sora 2 allow users to create persuasive simulations in moments.

Experts also warn about the spread of the so-called deepfake defence, where lawyers attempt to cast doubt on genuine recordings instead of accepting what is shown.

AI is also increasingly used to clean up real footage and to match surveillance clips with suspects. Such techniques can improve clarity, yet they also risk deepening inequalities when only some parties can afford to use them.

High-profile errors linked to facial recognition have already led to wrongful arrests, reinforcing the need for more explicit courtroom rules.

The report calls for specialised judicial training, new systems for storing and retrieving video evidence and stronger safeguards that help viewers identify manipulated content without compromising whistleblowers.

Researchers hope the findings prompt legal reforms that place scientific rigour at the centre of how courts treat digital evidence as it shifts further into an AI-driven era.

Deepfakes surge as scammers exploit AI video tools

Experts warn online video is entering a perilous new phase as AI deepfakes spread. Analysts estimate the number of deepfake videos online climbed from roughly 500,000 in 2023 to eight million in 2025.

Security researchers say deepfake scams have risen by more than 3,000 percent recently. Studies also indicate humans correctly spot high-quality fakes only around one in four times. People are urged to question surprising clips, verify stories elsewhere and trust their instincts.

Video apps such as Sora 2 create lifelike clips that fraudsters reuse for scams. Sora passed one million downloads, and its rules were later tightened after racist deepfakes of Martin Luther King Jr circulated.

Specialists at Outplayed suggest checking eye blinks, mouth movements and hands for subtle distortions. Inconsistent lighting, unnaturally smooth skin or glitching backgrounds can reveal manipulated or AI-generated video.

New report warns retailers are unprepared for AI-powered attacks

Retailers are entering the peak shopping season amid warnings that AI-driven cyber threats will accelerate. LevelBlue’s latest Spotlight Report says nearly half of retail executives are already seeing significantly higher attack volumes, while one-third have suffered a breach in the past year.

The sector is under pressure to roll out AI-driven personalisation and new digital channels, yet only a quarter feel ready to defend against AI attacks. Readiness gaps also cover deepfakes and synthetic identity fraud, even though most expect these threats to arrive soon.

Supply chain visibility remains weak, with almost half of executives reporting limited insight into software suppliers. Few list supplier security as a near-term priority, fuelling concern that vulnerabilities could cascade across retail ecosystems.

High-profile breaches have pushed cybersecurity into the boardroom, and most retailers now integrate security teams with business operations. Leadership performance metrics and risk appetite frameworks are increasingly aligned with cyber resilience goals.

Planned investment is focused on application security, business-wide resilience processes, and AI-enabled defensive tools. LevelBlue argues that sustained spending and cultural change are required if retailers hope to secure consumer trust amid rapidly evolving threats.

Vatican gathers global experts on AI and medicine

Medical professionals, ethicists and theologians gathered in the Vatican this week to discuss the ethical use of AI in healthcare. The conference, organised by the Pontifical Academy for Life and the International Federation of Catholic Medical Associations, highlighted the growing role of AI in diagnostics and treatment.

Speakers warned against reducing patient care to data alone, stressing that human interaction and personalised treatment remain central to medicine. Experts highlighted the need for transparency, non-discrimination and ethical oversight when implementing AI, noting that technology should enhance rather than replace human judgement.

The event also explored global experiences from regions including India, Latin America and Europe, with participants emphasising the role of citizens in shaping AI’s direction in medicine. Organisers called for ongoing dialogue between healthcare professionals, faith communities and technology leaders to ensure AI benefits patients while safeguarding human dignity.

Mass CCTV hack in India exposes maternity ward videos sold on Telegram

Police in Gujarat have uncovered an extensive cybercrime network selling hacked CCTV footage from hospitals, schools, offices and private homes across India.

The case surfaced after local media spotted YouTube videos showing women in a maternity ward during exams and injections, with links directing viewers to Telegram channels selling longer clips. The hospital involved said cameras were installed to protect staff from false allegations.

Investigators say hackers accessed footage from an estimated 50,000 CCTV systems nationwide and sold videos for 800–2,000 rupees, with some Telegram channels offering live feeds by subscription.

Arrests since February span Maharashtra, Uttar Pradesh, Gujarat, Delhi and Uttarakhand. Police have charged suspects under laws covering privacy violations, publication of obscene material, voyeurism and cyberterrorism.

Experts say weak security practices make Indian CCTV systems easy targets. Many devices run on default passwords such as Admin123. Hackers used brute-force tools to break into networks, according to investigators. Cybercrime specialists advise users to change their IP addresses and passwords, run periodic audits, and secure both home and office networks.

The case highlights how widespread CCTV has become in India, with cameras prevalent in both public and private spaces and often installed without consent or proper safeguards. Rights advocates say women face added stigma when sensitive footage is leaked, which discourages complaints.

Police said no patients or hospitals filed a report in this case due to fear of exposure, so an officer submitted the complaint.

Last year, the government urged states to avoid suppliers linked to past data breaches and introduced new rules to raise security standards, but breaches remain common.

Investigators say this latest case demonstrates how easily insecure systems can be exploited and how sensitive footage can spread online, resulting in severe consequences for victims.

India’s data protection rules finally take effect

India has activated the Digital Personal Data Protection Act 2023 after extended delays. Final regulations notified in November operationalise a long-awaited national privacy framework. The Act, passed in August 2023, now gains a fully operational compliance structure.

Implementation of the rules is staggered so organisations can adjust governance, systems and contracts. Some provisions, including the creation of a Data Protection Board, take effect immediately. Obligations on consent notices, breach reporting and children’s data begin after 12 or 18 months.

India introduces regulated consent managers acting as a single interface between users and data fiduciaries. Managers must register with the Board and follow strict operational standards. Parents will use digital locker-based verification when authorising the processing of children’s information online.

Global technology, finance and health providers now face major upgrades to internal privacy programmes. Lawyers expect major work mapping data flows, refining consent journeys and tightening security practices.

Google commits 40 billion dollars to expand Texas AI infrastructure

Google will pour 40 billion dollars into Texas by 2027, expanding digital infrastructure. Funding focuses on new cloud and AI facilities alongside existing campuses in Midlothian and Dallas.

Three new data centres are planned in the US, one in Armstrong County and two in Haskell County. One Haskell site will sit beside a solar plant and a battery storage facility. The investment is accompanied by agreements for more than 6,200 megawatts of additional power generation.

Google will create a 30 million dollar Energy Impact Fund supporting Texan energy efficiency and affordability projects. The company backs training for existing electricians and over 1,700 apprentices through electrical training programmes.

The spending strengthens Texas as a major hub for data centres and AI development. Google says the expanded infrastructure and workforce will help maintain US leadership in advanced computing technologies. The company highlights its 15-year presence in Texas and pledges ongoing community support.

New blueprint ensures fair AI in democratic processes

A rights-centred AI blueprint highlights the growing use of AI in analysing citizen submissions during public participation, promising efficiency but raising questions about fairness, transparency and human rights. Experts caution that poorly designed AI could silence minority voices, deepen inequalities and weaken trust in democratic decision-making.

The European Centre for Not-for-Profit Law (ECNL) provides detailed guidance for governments, civil society organisations and technology developers on how to implement AI responsibly. Recommendations include conducting human rights impact assessments, involving marginalised communities from the design stage, testing AI accuracy across demographics, and ensuring meaningful human oversight at every stage.

Transparency and accountability are key pillars of the framework, providing guidance on publishing assessments, documenting AI decision-making processes, and mitigating bias. Experts stress that efficiency gains should never come at the expense of inclusiveness, and that AI tools must be monitored and updated continually to reflect community feedback and rights considerations.

The blueprint also emphasises collaboration and sustainability, urging multistakeholder governance, civil society co-design, and ongoing training for public servants and developers. By prioritising rights, transparency and community engagement, AI in public participation can enhance citizen voices rather than undermining them, but only if implemented deliberately and inclusively.
