New report warns retailers are unprepared for AI-powered attacks

Retailers are entering the peak shopping season amid warnings that AI-driven cyber threats will accelerate. LevelBlue’s latest Spotlight Report says nearly half of retail executives are already seeing significantly higher attack volumes, while one-third have suffered a breach in the past year.

The sector is under pressure to roll out AI-driven personalisation and new digital channels, yet only a quarter of executives feel ready to defend against AI-powered attacks. Readiness gaps also extend to deepfakes and synthetic identity fraud, even though most expect these threats to arrive soon.

Supply chain visibility remains weak, with almost half of executives reporting limited insight into software suppliers. Few list supplier security as a near-term priority, fuelling concern that vulnerabilities could cascade across retail ecosystems.

High-profile breaches have pushed cybersecurity into the boardroom, and most retailers now integrate security teams with business operations. Leadership performance metrics and risk appetite frameworks are increasingly aligned with cyber resilience goals.

Planned investment is focused on application security, business-wide resilience processes, and AI-enabled defensive tools. LevelBlue argues that sustained spending and cultural change are required if retailers hope to secure consumer trust amid rapidly evolving threats.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Vatican gathers global experts on AI and medicine

Medical professionals, ethicists and theologians gathered in the Vatican this week to discuss the ethical use of AI in healthcare. The conference, organised by the Pontifical Academy for Life and the International Federation of Catholic Medical Associations, highlighted the growing role of AI in diagnostics and treatment.

Speakers warned against reducing patient care to data alone, stressing that human interaction and personalised treatment remain central to medicine. Experts highlighted the need for transparency, non-discrimination and ethical oversight when implementing AI, noting that technology should enhance rather than replace human judgement.

The event also explored global experiences from regions including India, Latin America and Europe, with participants emphasising the role of citizens in shaping AI’s direction in medicine. Organisers called for ongoing dialogue between healthcare professionals, faith communities and technology leaders to ensure AI benefits patients while safeguarding human dignity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Mass CCTV hack in India exposes maternity ward videos sold on Telegram

Police in Gujarat have uncovered an extensive cybercrime network selling hacked CCTV footage from hospitals, schools, offices and private homes across India.

The case surfaced after local media spotted YouTube videos showing women in a maternity ward during exams and injections, with links directing viewers to Telegram channels selling longer clips. The hospital involved said cameras were installed to protect staff from false allegations.

Investigators say hackers accessed footage from an estimated 50,000 CCTV systems nationwide and sold videos for 800–2,000 rupees, with some Telegram channels offering live feeds by subscription.

Arrests since February span Maharashtra, Uttar Pradesh, Gujarat, Delhi and Uttarakhand. Police have charged suspects under laws covering privacy violations, publication of obscene material, voyeurism and cyberterrorism.

Experts say weak security practices make Indian CCTV systems easy targets: many devices still run on factory-default passwords such as Admin123, and investigators say hackers used brute-force tools to break into networks. Cybercrime specialists advise users to change default passwords and IP addresses, run periodic audits, and secure both home and office networks.
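
That audit advice can be made concrete with a minimal sketch, assuming you administer the cameras yourself: the script below probes a list of your own devices' web interfaces for a few widely known factory-default logins so that any that still accept them can be changed. The addresses, port, endpoint and credential list are illustrative assumptions, and real devices vary by vendor, so treat this as a starting point rather than a complete audit.

```python
# Minimal self-audit sketch: check *your own* CCTV/IP cameras for
# factory-default web credentials. Hosts and credential pairs are
# illustrative assumptions; adapt them to your vendor's documentation.
import requests
from requests.auth import HTTPBasicAuth

# Hypothetical addresses of cameras you own and administer.
OWN_CAMERAS = ["192.168.1.64", "192.168.1.65"]

# Common factory defaults (including Admin123, cited in the investigation).
DEFAULT_CREDENTIALS = [("admin", "admin"), ("admin", "Admin123"), ("admin", "12345")]


def still_uses_default(host: str) -> bool:
    """Return True if the device's web interface accepts a default login."""
    for user, password in DEFAULT_CREDENTIALS:
        try:
            resp = requests.get(
                f"http://{host}/",
                auth=HTTPBasicAuth(user, password),
                timeout=5,
            )
        except requests.RequestException:
            continue  # unreachable or connection refused; not confirmed
        # A 200 response is treated as acceptance here; some devices expose
        # an unauthenticated landing page, so confirm manually before acting.
        if resp.status_code == 200:
            return True
    return False


if __name__ == "__main__":
    for host in OWN_CAMERAS:
        if still_uses_default(host):
            print(f"{host}: factory-default credentials accepted; change them")
        else:
            print(f"{host}: no default credentials accepted")
```

Any device that still accepts a factory login should have its password changed immediately and, where possible, be kept off direct internet exposure.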

The case highlights how widespread CCTV has become in India, with cameras prevalent in both public and private spaces and often installed without consent or proper safeguards. Rights advocates say women face added stigma when sensitive footage is leaked, which discourages complaints.

Police said no patients or hospitals filed a report in this case due to fear of exposure, so an officer submitted the complaint.

Last year, the government urged states to avoid suppliers linked to past data breaches and introduced new rules to raise security standards, but breaches remain common.

Investigators say this latest case demonstrates how easily insecure systems can be exploited and how sensitive footage can spread online, resulting in severe consequences for victims.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India’s data protection rules finally take effect

India has brought the Digital Personal Data Protection Act 2023 into force after extended delays. Final rules notified in November operationalise the long-awaited national privacy framework, giving the Act, passed in August 2023, a complete compliance structure.

Implementation of the rules is staggered so organisations can adjust governance, systems and contracts. Some provisions, including the creation of a Data Protection Board, take effect immediately. Obligations on consent notices, breach reporting and children’s data begin after 12 or 18 months.

The rules introduce regulated consent managers that act as a single interface between users and data fiduciaries. Consent managers must register with the Board and follow strict operational standards, and parents will use digital locker-based verification when authorising the processing of children's information online.

Global technology, finance and health providers now face significant upgrades to internal privacy programmes. Lawyers expect substantial work mapping data flows, refining consent journeys and tightening security practices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Google commits 40 billion dollars to expand Texas AI infrastructure

Google will pour 40 billion dollars into Texas by 2027 to expand its digital infrastructure. The funding focuses on new cloud and AI facilities alongside existing campuses in Midlothian and Dallas.

Three new US data centres are planned, one in Armstrong County and two in Haskell County. One Haskell site will sit beside a solar plant and battery storage facility. The investment is accompanied by agreements for more than 6,200 megawatts of additional power generation.

Google will create a 30 million dollar Energy Impact Fund supporting Texan energy efficiency and affordability projects. The company backs training for existing electricians and over 1,700 apprentices through electrical training programmes.

The spending strengthens Texas as a major hub for data centres and AI development. Google says the expanded infrastructure and workforce will help maintain US leadership in advanced computing technologies, and the company highlights its 15-year presence in Texas while pledging ongoing community support.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

New blueprint aims to ensure fair AI in democratic processes

A rights-centred AI blueprint highlights the growing use of AI in analysing citizen submissions during public participation, promising efficiency but raising questions about fairness, transparency and human rights. Experts caution that poorly designed AI could silence minority voices, deepen inequalities and weaken trust in democratic decision-making.

The European Centre for Not-for-Profit Law (ECNL) provides detailed guidance for governments, civil society organisations and technology developers on how to implement AI responsibly. Recommendations include conducting human rights impact assessments, involving marginalised communities from the design stage, testing AI accuracy across demographics, and ensuring meaningful human oversight at every stage.

Transparency and accountability are key pillars of the framework, which provides guidance on publishing assessments, documenting AI decision-making processes and mitigating bias. Experts stress that efficiency gains should never come at the expense of inclusiveness, and that AI tools must be monitored and updated continually to reflect community feedback and rights considerations.

The blueprint also emphasises collaboration and sustainability, urging multistakeholder governance, civil society co-design, and ongoing training for public servants and developers. By prioritising rights, transparency and community engagement, AI in public participation can enhance citizen voices rather than undermining them, but only if implemented deliberately and inclusively.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI supports doctors in spotting broken bones

Hospitals in Lincolnshire, UK, are introducing AI to assist doctors in identifying fractures and dislocations, with the aim of speeding up treatment and improving patient care. The Northern Lincolnshire and Goole NHS Foundation Trust will launch a two-year NHS England pilot later this month.

AI software will provide near-instant annotated X-rays alongside standard scans, highlighting potential issues for clinicians to review. Patients under the age of two, as well as those undergoing chest, spine, skull, facial or soft tissue imaging, will not be included in the pilot.

Consultants emphasise that AI is an additional tool, not a replacement, and clinicians will retain the final say on diagnosis and treatment. Early trials in northern Europe suggest the technology can help meet rising demand, and the trust is monitoring its impact closely.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI Scientist Kosmos links every conclusion to code and citations

OpenAI chief Sam Altman has praised Future House’s new AI Scientist, Kosmos, calling it an exciting step toward automated discovery. The platform upgrades the earlier Robin system and is now operated by Edison Scientific, which plans a commercial tier alongside free access for academics.

Kosmos addresses a key limitation in traditional models: the inability to track long reasoning chains while processing scientific literature at scale. It uses structured world models to stay focused on a single research goal across tens of millions of tokens and hundreds of agent runs.

A single Kosmos run can analyse around 1,500 papers and more than 40,000 lines of code, with early users estimating that this replaces roughly six months of human work. Internal tests found that almost 80 per cent of its conclusions were correct.

Future House reported seven discoveries made during testing, including three that matched known results and four new hypotheses spanning genetics, ageing, and disease. Edison says several are now being validated in wet lab studies, reinforcing the system’s scientific utility.

Kosmos emphasises traceability, linking every conclusion to specific code or source passages to avoid black-box outputs. It is priced at $200 per run, with early pricing guarantees and free credits for academics, though multiple runs may still be required for complex questions.
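
The underlying idea of provenance-linked conclusions can be shown with a minimal sketch, assuming a simple record that pairs each claim with the code and citations behind it. The field names and example values below are hypothetical illustrations, not Edison Scientific's actual schema.

```python
# Hypothetical sketch of a provenance-linked conclusion: each claim carries
# pointers to the code and literature passages supporting it, so a reviewer
# can audit the chain instead of trusting a black-box summary.
from dataclasses import dataclass, field


@dataclass
class Evidence:
    kind: str       # "code" or "citation"
    reference: str  # e.g. a script path or a DOI
    excerpt: str    # the specific lines or passage relied on


@dataclass
class Conclusion:
    statement: str
    evidence: list[Evidence] = field(default_factory=list)

    def is_traceable(self) -> bool:
        """A conclusion counts as traceable only if it cites at least one source."""
        return len(self.evidence) > 0


# Example: a claim backed by both an analysis script and a cited paper.
claim = Conclusion(
    statement="Gene X expression correlates with marker Y in the cohort.",
    evidence=[
        Evidence("code", "analysis/correlation.py", "spearmanr(x, y)"),
        Evidence("citation", "doi:10.0000/example", "Table 2, cohort statistics"),
    ],
)
assert claim.is_traceable()
```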

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Digital accessibility drives revenue as AI adoption rises

Research highlights that digital accessibility is now viewed as a driver of business growth rather than a compliance requirement.

A survey of over 1,600 professionals across the US, UK, and Europe found that 75% of organisations linked accessibility improvements to revenue gains, while 91% reported enhanced user experience and 88% noted brand reputation benefits.

AI is playing an increasingly central role in accessibility initiatives. More than 80% of organisations now use AI tools to support accessibility, particularly in mature programmes with formal policies, accountability structures, and dedicated budgets.

Leaders in these organisations view AI as a force multiplier, complementing human expertise rather than replacing it. Despite progress, many organisations still implement accessibility late in digital development processes. Only around 28% address accessibility during planning, and 27% during design stages.

Leadership support and effective training emerged as key success factors. Organisations with engaged executives and strong accessibility training were far more likely to achieve revenue and operational benefits while reducing perceived legal risk.

As AI adoption accelerates and regulatory frameworks expand, companies treating accessibility strategically are better positioned to gain competitive advantage.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Eurofiber France reportedly hit by data breach

Eurofiber France has suffered a data breach affecting its internal ticket management system and ATE customer portal, reportedly discovered on 13 November. The incident allegedly involved unauthorised access via a software vulnerability, with the full extent still unclear.

Sources indicate that approximately 3,600 customers could be affected, including major French companies and public institutions. Reports suggest that some of the allegedly stolen data, ranging from documents to cloud configurations, may have appeared on the dark web for sale.

Eurofiber has emphasised that Dutch operations are not affected.

The company moved quickly to secure affected systems, increasing monitoring and collaborating with cybersecurity specialists to investigate the incident. The French privacy regulator, CNIL, has been informed, and Eurofiber states that it will continue to update customers as the investigation progresses.

Founded in 2000, Eurofiber provides fibre optic infrastructure across the Netherlands, Belgium, France, and Germany. Primarily owned by Antin Infrastructure Partners and partially by Dutch pension fund PGGM, the company remains operational while assessing the impact of the breach.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot