AI misuse exposed as OpenAI details global disinformation and scam networks

OpenAI said criminal and state-linked groups misused ChatGPT for disinformation, scams and covert influence. Its latest threat report details coordinated account bans and highlights how AI tools are embedded within broader operational workflows rather than used in isolation.

One investigation linked accounts to Chinese law enforcement engaged in what were described as ‘cyber special operations’. Activities included planning influence campaigns, mass-reporting dissidents and drafting forged materials, with related efforts continuing through other tools despite model refusals.

The report also outlined a Cambodia-based romance scam targeting young men in Indonesia through a fake dating agency. Operators combined manual prompting with automated chatbots to sustain conversations and facilitate financial fraud, leading to account removals.

Separately, accounts tied to Russia’s ‘Rybar’ network used ChatGPT to draft and translate posts distributed across multiple platforms. OpenAI noted that campaign impact depended more on account reach and coordination than on AI-generated content alone.

Across China, Russia and parts of Southeast Asia, actors treated AI as one tool among many, alongside fake profiles, paid advertising and forged documents. OpenAI called for cross-industry vigilance, stressing the need to analyse behavioural patterns across platforms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta AI flood of unusable abuse tips overwhelms US investigators

Investigators in the US say that AI used by Meta is flooding child protection units with large volumes of unhelpful reports, thereby draining resources rather than assisting ongoing cases.

Officers in the Internet Crimes Against Children network told a New Mexico court that most alerts generated by the company’s platforms lack essential evidence or contain material that is not criminal, leaving teams unable to progress investigations.

Meta rejects the claim that it prioritises profit, stressing its cooperation with law enforcement and highlighting rapid response times to emergency requests.

Its position is challenged by officers who say the volume of AI-generated alerts has doubled since 2024, particularly after the Report Act broadened reporting obligations.

They argue that adolescent conversations and incomplete data now form a sizeable portion of the alerts, while genuine cases of child sexual abuse material are becoming harder to detect.

Internal company documents disclosed at trial show Meta executives raising concerns as early as 2019 about the impact of end-to-end encryption on the firm’s ability to identify child exploitation and support investigators.

Child safety groups have long warned that encryption could limit early detection, even though Meta says it has introduced new tools designed to operate safely within encrypted environments.

The growing influx of unusable tips is taking a heavy toll on investigative teams. Officers in the US say each report must still be reviewed manually, despite the low likelihood of actionable evidence, and this backlog is diminishing morale at a time when they say resources have not kept pace with demand.

They warn that meaningful cases risk being delayed as units struggle with a workload swollen by AI systems tuned to avoid regulatory penalties rather than investigative value.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK enforces mandatory ETA as digital border era begins

Non-visa nationals are now barred from entering the UK without digital permission, as the country has begun enforcing the mandatory Electronic Travel Authorisation.

Travellers from 85 nations, including the US, Canada and France, must obtain an ETA before departure; otherwise, airlines will prevent them from boarding rather than allow last-minute checks at the border. The authorisation costs £16 and remains valid for two years or until a passport expires.

British and Irish citizens remain exempt but must present valid proof of status when travelling. Authorities say the scheme brings the UK into line with similar systems used by the US and the EU.

The Home Office emphasises that the measure strengthens border security and supports a modern, efficient entry process designed to benefit both visitors and the wider public.

The requirement also applies to travellers passing through the UK to take connecting flights, reinforcing the shift toward a fully digital immigration system.

Over 19 million people have already used the ETA since its launch in 2023, generating significant revenue that is being reinvested in broader border improvements. Officials argue that the momentum paves the way for a future contactless border, supported by the steady transition from physical documents to eVisas.

From 26 February, Certificates of Entitlement will also be issued digitally, creating a single record that no longer expires with a passport.

Most ETA applications are processed automatically within minutes, allowing short-notice trips to remain possible. However, authorities still recommend applying at least three working days in advance to avoid delays for the small number of cases that require additional review.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE builds sovereign financial cloud

The Central Bank of the UAE has partnered with Abu Dhabi-based AI company Core42 to develop a sovereign financial cloud infrastructure in the UAE. The system is designed to ensure data sovereignty and strengthen protection against cyber threats.

According to the Central Bank of the UAE, the platform will operate on a centralised, highly secure and isolated infrastructure. It aims to support continuous financial services while boosting operational agility across the UAE.

The infrastructure will be powered by AI and provide automation and real-time data analysis for licensed institutions in the UAE. It will also enable unified management of multi-cloud services within a single regulatory framework.

Core42, established by G42 in 2023, said finance must remain sovereign as it relies on digital infrastructure. The Central Bank of the UAE described the project as a key pillar of its financial infrastructure transformation programme.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Conduent breach exposes data of 25 million people across US

More than 25 million people across the United States have had personal information exposed following a ransomware attack on government contractor Conduent. Updated state breach notifications indicate the incident is larger than initially understood.

Conduent provides printing, payment processing, and benefit administration services for state agencies and large corporations. Its systems support food assistance, unemployment benefits, and workplace programmes, reaching more than 100 million individuals, according to the company.

State-level disclosures show Oregon and Texas account for most of the affected records, with additional cases reported in Massachusetts, New Hampshire, and Washington. Compromised data includes names, dates of birth, addresses, Social Security numbers, health insurance information, and medical details.

Public information from Conduent has been limited since the January 2025 attack. An incident notice published in October carried a ‘noindex’ tag in its source code, preventing search engines from listing the page, which critics say reduced visibility for affected individuals.

The breach ranks among the largest recent ransomware incidents, though it is smaller than the 2024 Change Healthcare attack that affected 190 million people. Regulators and affected users continue seeking clarity on the Conduent case and its security failures.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic faces data theft claims from Musk

Elon Musk, CEO of Tesla and xAI, has publicly accused Anthropic of stealing large volumes of data to train its AI models. The allegation was made on X in response to posts referencing Community Notes attached to Anthropic-related content.

Musk claimed the company had engaged in large-scale data theft and suggested that it had paid multi-billion-dollar settlements. Those financial claims remain contested, and no official confirmation has been provided to substantiate the figures.

Anthropic, known for developing the Claude AI model, was founded by former OpenAI employees and promotes an approach centred on AI safety and responsible development. The company has not publicly responded to Musk’s latest accusations.

The dispute reflects a broader conflict across the AI industry over how companies collect the text, images and other materials required to train large language models. Much of this data is scraped from the internet, often without explicit permission from rights holders.

Multiple lawsuits filed by authors, media organisations and software developers are testing whether large-scale scraping qualifies as fair use under copyright law. Court rulings in these cases could reshape licensing practices, impose financial penalties, and alter the economics of AI development.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI-enhanced electronic nose shows promise for early ovarian cancer detection

Scientists are combining AI with advanced sensor technology, commonly known as an electronic nose, to detect subtle patterns in volatile organic compounds (VOCs) associated with ovarian cancer.

The AI component improves the system’s ability to differentiate disease-specific chemical fingerprints from benign or background VOC profiles, increasing sensitivity and specificity compared with earlier sensor-only approaches.

Ovarian cancer is notoriously difficult to diagnose in early stages due to vague symptoms and a lack of reliable screening tools. The AI-boosted electronic nose aims to fill this gap by analysing breath, urine, or blood headspace samples in a non-invasive manner, with the potential to be deployed in clinical or even point-of-care settings.

Early experimental results suggest that classifying VOC patterns with machine learning models can distinguish ovarian cancer cases with greater accuracy than traditional methods alone. However, larger clinical validation studies are still underway.
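To illustrate the general idea of pattern-based classification of sensor readings, here is a minimal sketch. It is not the study's actual model: the nearest-centroid approach, the three-element feature vectors and all numeric values are invented for illustration only.

```python
# Illustrative sketch (not the study's method): a nearest-centroid
# classifier separating synthetic VOC sensor readings into two classes.
import math

def centroid(samples):
    """Mean feature vector of a list of equal-length readings."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train(cases, controls):
    """Summarise each class by the centroid of its training samples."""
    return {"case": centroid(cases), "control": centroid(controls)}

def classify(model, sample):
    """Assign the label whose centroid lies closest to the sample."""
    return min(model, key=lambda label: distance(model[label], sample))

# Synthetic sensor responses (arbitrary units), purely illustrative.
cases = [[0.9, 0.2, 0.7], [0.8, 0.3, 0.6]]
controls = [[0.1, 0.8, 0.2], [0.2, 0.7, 0.3]]
model = train(cases, controls)
print(classify(model, [0.85, 0.25, 0.65]))  # near the case centroid
```

Real systems use far richer models and feature sets, but the principle is the same: learn a compact representation of each class's chemical fingerprint, then assign new samples to the nearest one.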

Researchers emphasise that this technology is intended as a screening and triage tool to flag individuals for more definitive diagnostics, not as a standalone diagnostic test at present.

If successfully scaled and validated, AI-enhanced VOC detection could lead to earlier interventions and improved survival outcomes for patients with ovarian cancer.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CarGurus data leak surfaces as ShinyHunters publishes archive

The ShinyHunters extortion group has published a 6.1GB archive, which it claims contains more than 12 million records stolen from CarGurus, a US-based automotive platform. Have I Been Pwned listed the dataset, reporting that roughly 3.7 million records appear to be new.

The exposed information includes email addresses, IP addresses, full names, phone numbers, physical addresses, user account IDs, and finance-related application data belonging to CarGurus users. Dealer account details and subscription information were also reportedly included in the archive.

CarGurus has not issued a public statement confirming a breach. However, Have I Been Pwned said it attempts to verify the authenticity of datasets before adding them to its database, suggesting a level of validation of the leaked material.

Security experts warn that the availability of the data could increase the risk of phishing. Users are advised to remain cautious of unsolicited communications and potential scams that may leverage the exposed personal information.

ShinyHunters has recently claimed attacks against multiple large organisations across telecoms, fintech, retail, and media. The group is known for using social engineering tactics, including voice phishing and malicious OAuth applications, to gain access to SaaS platforms and extract customer data.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI helps researchers see the bigger picture in cell biology

Scientists at Massachusetts Institute of Technology (MIT) report progress in applying AI to integrate and interpret diverse biological datasets, helping overcome key challenges in cell biology research.

Traditional experimental approaches often generate fragmented data, such as gene expression profiles, imaging, and molecular interactions, that are difficult to combine into a coherent view of cellular systems.

By contrast, AI models can learn patterns across multiple data types, reveal connections between disparate datasets, and generate holistic representations of cell behaviour that would otherwise require extensive manual synthesis.

The new AI techniques allow researchers to uncover relationships between genes, proteins and cellular processes with greater clarity, enabling improved hypothesis generation, experimental design and understanding of complex biological phenomena such as development, disease progression and response to therapies.

Because these AI tools can help prioritise experimental directions and reduce reliance on trial-and-error studies, they may accelerate breakthroughs in areas ranging from immunology to cancer biology.

Researchers emphasise that AI complements, rather than replaces, traditional biological expertise, acting as a data-driven partner that expands scientists’ ability to see the ‘bigger picture’ across scales and contexts.

Ethical and methodological considerations also underscore the importance of validating AI-generated insights with rigorous experiments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How multimodal sensing powers physical AI

Multimodal sensing allows physical AI systems to combine inputs such as vision, audio, lidar and touch to build situational awareness in real time. The approach enables machines to operate autonomously in complex physical environments.

The architecture typically includes input modules for individual sensors, a fusion module to combine relevant data, and an output module to generate actions. Applications range from robotics and autonomous vehicles to spatial AI systems navigating dynamic 3D spaces.

Fusion techniques vary by use case, from Bayesian networks for uncertainty management to Kalman filters for navigation and neural networks for robotic manipulation. The aim is to leverage complementary sensor strengths while maintaining reliability.

Implementation presents technical challenges including environmental noise filtering, calibration across time and space, and balancing redundant versus complementary sensing. Engineers must also manage trade-offs in processing power, controllers and system design.
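The fusion step described above can be sketched in miniature. The following is a hedged, one-dimensional example of inverse-variance weighting, the building block of a Kalman filter's measurement update; the sensor names, values and variances are invented for illustration.

```python
# Minimal 1-D sketch of sensor fusion by inverse-variance weighting,
# the core of a Kalman filter's measurement update step.

def fuse(est_a, var_a, est_b, var_b):
    """Combine two noisy estimates of the same quantity.

    Each sensor is weighted by its confidence (inverse variance);
    the fused variance is smaller than either input, reflecting
    the reliability gained by combining complementary sensors.
    """
    k = var_a / (var_a + var_b)          # Kalman-style gain
    est = est_a + k * (est_b - est_a)    # blended estimate
    var = (1 - k) * var_a                # reduced uncertainty
    return est, var

# e.g. lidar reports 10.0 m (variance 0.25), camera 10.6 m (variance 0.75)
est, var = fuse(10.0, 0.25, 10.6, 0.75)
print(est, var)  # fused range sits nearer the more confident lidar value
```

Production systems extend this idea across many sensors, dimensions and time steps, but the principle is identical: weight each modality by how much it can be trusted at that moment.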

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!