AI-driven Christmas scams surge online

Cybersecurity researchers are urging greater caution as Christmas approaches, warning that seasonal scams are multiplying rapidly. Check Point has recorded over 33,500 festive phishing emails and more than 10,000 deceptive social ads within two weeks.

AI tools are helping criminals craft convincing messages that mirror trusted brands and local languages. Attackers are also deploying fake e-commerce sites with AI chatbots, as well as deepfake audio and scripted calls to strengthen vishing attempts.

Smishing messages posing as delivery-firm alerts are becoming more widespread, with recent months showing a marked rise in parcel-delivery scams. Victims are often tricked into sharing payment details through links that imitate genuine logistics updates.

Experts say fake shops and giveaway scams remain persistent risks, frequently launched from accounts created within the past three months. Users are being advised to ignore unsolicited links, verify retailers and treat unexpected offers with scepticism.
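
One practical check behind that advice is to look at where a link actually points rather than at the text around it. The short Python sketch below is purely illustrative: the courier domain list and the scam-style URL are invented for the example.

```python
from urllib.parse import urlparse

# Hypothetical allow-list of official courier domains (illustrative only).
OFFICIAL_DOMAINS = {"dhl.com", "ups.com", "fedex.com", "royalmail.com"}

def looks_official(url: str) -> bool:
    """True only if the link's host is an official domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

# A genuine tracking link versus a lookalike that merely starts with "dhl.com".
for link in ("https://tracking.dhl.com/parcel/123",
             "https://dhl.com-parcel-update.example/pay"):
    print(link, "->", "looks official" if looks_official(link) else "suspicious")
```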

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Credit reporting breach exposes 5.6 million consumers through third-party API

US credit reporting company 700Credit has confirmed a data breach affecting more than 5.6 million individuals after attackers exploited a compromised third-party API used to exchange consumer data with external integration partners.

The incident originated from a supply chain failure: one partner was breached earlier in 2025 and failed to notify 700Credit.

The attackers launched a sustained, high-volume data extraction campaign starting on October 25, 2025, which operated for more than two weeks before access was shut down.
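
The two-week window is striking because bulk extraction on this scale tends to stand out against a partner's normal traffic. The sketch below is a hypothetical illustration of that point, not a description of 700Credit's monitoring: it flags any day on which an API key pulls an order of magnitude more records than its recent average (the figures are invented).

```python
from statistics import mean

# Hypothetical daily record-pull counts for one partner API key, in day order;
# the jump mimics the start of a bulk-extraction run.
daily_pulls = [1_100, 1_250, 1_180, 1_320, 48_000, 51_500, 47_300]

WINDOW = 3       # trailing days used as the baseline
THRESHOLD = 10   # alert when a day exceeds 10x the trailing average

for day, pulled in enumerate(daily_pulls):
    if day < WINDOW:
        continue
    baseline = mean(daily_pulls[day - WINDOW:day])
    if pulled > THRESHOLD * baseline:
        # A production system would also exclude flagged days from the baseline.
        print(f"Day {day}: {pulled:,} records pulled "
              f"(trailing average ~{baseline:,.0f}) -> investigate this key")
```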

Around 20 percent of consumer records were accessed, exposing names, home addresses, dates of birth and Social Security numbers, while internal systems, payment platforms and login credentials were not compromised.

Despite the absence of financial system access, the exposed personal data significantly increases the risk of identity theft and sophisticated phishing attacks impersonating credit reporting services.

The breach has been reported to the Federal Trade Commission and the FBI, with regulators coordinating responses through industry bodies representing affected dealerships.

Individuals impacted by the incident are currently being notified and offered two years of free credit monitoring, complimentary credit reports and access to a dedicated support line.

Authorities have urged recipients to act promptly by monitoring their credit activity and taking protective measures to minimise the risk of fraud.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

No sensitive data compromised in SoundCloud incident

SoundCloud has confirmed a recent security incident that temporarily affected platform availability and involved the limited exposure of user data. The company detected unauthorised activity on an ancillary service dashboard and acted immediately to contain the situation.

Third-party cybersecurity experts were engaged to investigate and support the response. The incident included two brief denial-of-service attacks, which temporarily disrupted web access.

Approximately 20% of users were affected; however, no sensitive data, such as passwords or financial details, was compromised. Only email addresses and publicly visible profile information were involved.

In response, SoundCloud has strengthened its systems, enhancing monitoring, reviewing identity and access controls, and auditing related systems. Some configuration updates have led to temporary VPN connectivity issues, which the company is working to resolve.

SoundCloud emphasises that user privacy remains a top priority and encourages vigilance against phishing. The platform will continue to provide updates and take steps to minimise the risk of future incidents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI reshapes cybercrime investigations in India

Maharashtra police are expanding the use of an AI-powered investigation platform developed with Microsoft to tackle the rapid growth of cybercrime.

MahaCrimeOS AI, already in use across Nagpur district, will now be deployed to more than 1,100 police stations statewide, significantly accelerating case handling and investigation workflows.

The system acts as an investigation copilot, automating complaint intake, evidence extraction and legal documentation across multiple languages.

Officers can analyse transaction trails, request data from banks and telecom providers and follow standardised investigation pathways, instead of relying on slow manual processes.

Built using Microsoft Foundry and Azure OpenAI Service, MahaCrimeOS AI integrates policing protocols, criminal law references and open-source intelligence.
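
MahaCrimeOS AI's internals are not public, but the Azure OpenAI Service it is built on exposes a standard chat-completions interface. The Python sketch below is an assumed, minimal example of the kind of call a complaint-intake step might wrap; the endpoint, deployment name, complaint text and prompt are placeholders, not details of the actual system.

```python
from openai import AzureOpenAI

# Placeholder endpoint, key and deployment name (not the real MahaCrimeOS setup).
client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",
    api_key="YOUR_KEY",
    api_version="2024-06-01",
)

complaint_text = "Complainant reports an unauthorised UPI transfer of Rs 45,000 on 3 December..."

response = client.chat.completions.create(
    model="gpt-4o",  # name of the Azure deployment, assumed for illustration
    messages=[
        {"role": "system",
         "content": "Extract the complainant's loss amount, payment channel and date "
                    "from the complaint, and list the records to request from the bank."},
        {"role": "user", "content": complaint_text},
    ],
)
print(response.choices[0].message.content)
```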

Investigators report major efficiency gains, handling several cases monthly where only one was previously possible, while maintaining procedural accuracy and accountability.

The initiative highlights how responsible AI deployment can strengthen public institutions.

By reducing administrative burden and improving investigative capacity, the platform allows officers to focus on victim support and crime resolution, marking a broader shift toward AI-assisted governance in India.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI outlines safeguards as AI cyber capabilities advance

Cyber capabilities in advanced AI models are improving rapidly, delivering clear benefits for cyberdefence while introducing new dual-use risks that require careful management, according to OpenAI’s latest assessment.

The company points to sharp gains in capture-the-flag performance, with success rates rising from 27 percent in August to 76 percent by November 2025. OpenAI says future models could reach high cyber capability, including assistance with sophisticated intrusion techniques.

To address this, OpenAI says it is prioritising defensive use cases, investing in tools that help security teams audit code, patch vulnerabilities, and respond more effectively to threats. The goal is to give defenders an advantage in an often under-resourced environment.
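
OpenAI has not published the internals of those tools, but the kind of code-audit assistance it describes can be sketched against its public chat-completions API. The example below is illustrative only; the model name, prompt and sample diff are assumptions.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Invented diff containing an obvious SQL-injection fix, used as review input.
DIFF = """\
--- a/auth.py
+++ b/auth.py
@@ def login(user, password):
-    query = f"SELECT * FROM users WHERE name='{user}' AND pw='{password}'"
+    query = "SELECT * FROM users WHERE name=%s AND pw=%s"
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "You are a security reviewer. Flag vulnerabilities introduced or "
                    "left unresolved by this diff and suggest concrete fixes."},
        {"role": "user", "content": DIFF},
    ],
)
print(response.choices[0].message.content)
```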

OpenAI argues that cybersecurity cannot be governed through a single safeguard, as defensive and offensive techniques overlap. Instead, it applies a defence-in-depth approach that combines access controls, monitoring, detection systems, and extensive red teaming to limit misuse.

Alongside these measures, the company plans new initiatives, including trusted access programmes for defenders, agent-based security tools in private testing, and the creation of a Frontier Risk Council. OpenAI says these efforts reflect a long-term commitment to cyber resilience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UNODC and INTERPOL announce Global Fraud Summit in 2026

The United Nations Office on Drugs and Crime (UNODC), in cooperation with the International Criminal Police Organization (INTERPOL), will convene the Global Fraud Summit 2026 at the Vienna International Centre, Austria, from 16 to 17 March 2026.

UNODC and INTERPOL invite applications for participation from private sector entities, civil society organisations, and academic institutions. Applications must be submitted by 12 December 2025.

The Summit will provide a platform for discussion on current trends, risks, and responses related to fraud, including its digital and cross-border dimensions. Discussions will address challenges associated with detection, investigation, prevention, and international cooperation in fraud-related cases.

The objectives of the Summit include:

  • Facilitating coordination among national and international stakeholders
  • Supporting information exchange across sectors and jurisdictions
  • Sharing policy, operational, and technical approaches to fraud prevention and response
  • Identifying areas for further cooperation and capacity-building

The ministerial-level meeting will bring together senior representatives from governments, international and regional organisations, law enforcement authorities, the private sector, academia, and civil society. Participating institutions are encouraged to nominate delegates at an appropriate senior level.

The Summit is supported by a financial contribution from the Government of the United Kingdom of Great Britain and Northern Ireland.

Applications must be submitted through the application form on the official website.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

International Criminal Court (ICC) issues policy on cyber-enabled crimes

The Office of the Prosecutor (OTP) of the International Criminal Court (ICC) has issued a Policy on Cyber-Enabled Crimes under the Rome Statute. The Policy sets out how the OTP interprets and applies the existing ICC legal framework to conduct that is committed or facilitated through digital and cyber means.

The Policy clarifies that the ICC’s jurisdiction remains limited to crimes defined in the Rome Statute: genocide, crimes against humanity, war crimes, the crime of aggression, and offences against the administration of justice. It does not extend to ordinary cybercrimes under domestic law, such as hacking, fraud, or identity theft, unless such conduct forms part of or facilitates one of the crimes within the Court’s jurisdiction.

According to the Policy, the Rome Statute is technology-neutral. This means that the legal assessment of conduct depends on whether the elements of a crime are met, rather than on the specific tools or technologies used.

As a result, cyber means may be relevant both to the commission of Rome Statute crimes and to the collection and assessment of evidence related to them.

The Policy outlines how cyber-enabled conduct may relate to each category of crimes under the Rome Statute. Examples include cyber operations affecting essential civilian services, the use of digital platforms to incite or coordinate violence, cyber activities causing indiscriminate effects in armed conflict, cyber operations linked to inter-State uses of force, and digital interference with evidence, witnesses, or judicial proceedings before the ICC.

The Policy was developed through consultations with internal and external legal and technical experts, including the OTP’s Special Adviser on Cyber-Enabled Crimes, Professor Marko Milanović. It does not modify or expand the ICC’s jurisdiction, which remains governed exclusively by the Rome Statute.

Currently, there are no publicly known ICC cases focused specifically on cyber-enabled crimes. However, the issuance of the Policy reflects the OTP’s assessment that digital conduct may increasingly be relevant to the commission, facilitation, and proof of crimes within the Court’s mandate.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Online data exposure heightens threats to healthcare workers

Healthcare workers are facing escalating levels of workplace violence, with more than three-quarters reporting verbal or physical assaults, prompting hospitals to reassess how they protect staff from both on-site and external threats.

A new study examining people-search sites suggests that online exposure of personal information may worsen these risks. Researchers analysed the digital footprints of hundreds of senior medical professionals, finding widespread availability of sensitive personal data.

The study shows that many doctors appear across multiple data broker platforms, with a significant share listed on five or more sites, making it difficult to track, manage, or remove personal information once it enters the public domain.

Exposure varies by age and geography. Younger doctors tend to have smaller digital footprints, while older professionals are more exposed due to accumulated public records. State-level transparency laws also appear to influence how widely data is shared.

Researchers warn that detailed profiles, often available for a small fee, can enable harassment or stalking at a time when threats against healthcare leaders are rising. The findings renew calls for stronger privacy protections for medical staff.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US War Department unveils AI-powered GenAI.mil for all personnel

The War Department has formally launched GenAI.mil, a bespoke generative AI platform powered initially by Gemini for Government, making frontier AI capabilities available to its approximately three million military, civilian, and contractor staff.

According to the department’s announcement, GenAI.mil supports so-called ‘intelligent agentic workflows’: users can summarise documents, generate risk assessments, draft policy or compliance material, analyse imagery or video, and automate routine tasks, all on a secure, IL5-certified platform designed for Controlled Unclassified Information (CUI).
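
GenAI.mil itself is a closed, IL5-accredited platform, so the document-summarisation workflow it describes can only be approximated here. The sketch below uses Google's public GenAI Python SDK as a stand-in; the model name, file and prompt are assumptions, not details of the government deployment.

```python
from google import genai

client = genai.Client()  # reads the API key from the environment

# Placeholder document; GenAI.mil would handle CUI on its own accredited infrastructure.
with open("policy_memo.txt", encoding="utf-8") as f:
    memo = f.read()

response = client.models.generate_content(
    model="gemini-2.5-flash",  # illustrative public model name
    contents=f"Summarise the following memo in five bullet points, "
             f"then list any compliance risks it raises:\n\n{memo}",
)
print(response.text)
```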

The rollout, described as part of a broader push to cultivate an ‘AI-first’ workforce, follows a July directive from the administration calling for the United States to achieve ‘unprecedented levels of AI technological superiority.’

Department leaders said the platform marks a significant shift in how the US military operates, embedding AI into daily workflows and positioning AI as a force multiplier.

Access is limited to users with a valid DoW common-access card, and the service is currently restricted to non-classified work. The department also says the first rollout is just the beginning; additional AI models from other providers will be added later.

From a tech-governance and defence-policy perspective, this represents one of the most sweeping deployments of generative AI in a national security organisation to date.

It raises critical questions about security, oversight and the balance between efficiency and risk, especially if future iterations expand into classified or operational planning contexts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New spyware threat alerts issued by Apple and Google

Apple and Google have issued a fresh round of cyber threat notifications, warning users worldwide they may have been targeted by sophisticated surveillance operations linked to state-backed actors.

Apple said it sent alerts on 2 December, confirming it has now notified users in more than 150 countries, though it declined to disclose how many people were affected or who was responsible.

Google followed on 3 December, announcing warnings for several hundred accounts targeted by Intellexa spyware across multiple countries in Africa, Central Asia, and the Middle East.

The Alphabet-owned company said Intellexa continues to evade restrictions despite US sanctions, highlighting persistent challenges in limiting the spread of commercial surveillance tools.

Researchers say such alerts raise costs for cyber spies by exposing victims, often triggering investigations that can lead to public scrutiny and accountability over spyware misuse.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!