Credit reporting breach exposes 5.6 millions consumers through third party API

US credit reporting company 700Credit has confirmed a data breach affecting more than 5.6 million individuals after attackers exploited a compromised third-party API used to exchange consumer data with external integration partners.

An incident that originated from a supply chain failure after one partner was breached earlier in 2025 and failed to notify 700Credit.

The attackers launched a sustained, high-volume data extraction campaign starting on October 25, 2025, which operated for more than two weeks before access was shut down.

Around 20 percent of consumer records were accessed, exposing names, home addresses, dates of birth and Social Security numbers, while internal systems, payment platforms and login credentials were not compromised.

Despite the absence of financial system access, the exposed personal data significantly increases the risk of identity theft and sophisticated phishing attacks impersonating credit reporting services.

The breach has been reported to the Federal Trade Commission and the FBI, with regulators coordinating responses through industry bodies representing affected dealerships.

Individuals impacted by the incident are currently being notified and offered two years of free credit monitoring, complimentary credit reports and access to a dedicated support line.

Authorities have urged recipients to act promptly by monitoring their credit activity and taking protective measures to minimise the risk of fraud.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

No sensitive data compromised in SoundCloud incident

SoundCloud has confirmed a recent security incident that temporarily affected platform availability and involved the limited exposure of user data. The company detected unauthorised activity on an ancillary service dashboard and acted immediately to contain the situation.

Third-party cybersecurity experts were engaged to investigate and support the response. The incident resulted in two brief denial-of-service attacks, temporarily disrupting web access.

Approximately 20% of users were affected; however, no sensitive data, such as passwords or financial details, were compromised. Only email addresses and publicly visible profile information were involved.

In response, SoundCloud has strengthened its systems, enhancing monitoring, reviewing identity and access controls, and auditing related systems. Some configuration updates have led to temporary VPN connectivity issues, which the company is working to resolve.

SoundCloud emphasises that user privacy remains a top priority and encourages vigilance against phishing. The platform will continue to provide updates and take steps to minimise the risk of future incidents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI content flood drives ‘slop’ to word of the year

Merriam-Webster has chosen ‘slop’ as its 2025 word of the year, reflecting the rise of low-quality digital content produced by AI. The term originally meant soft mud, but now describes absurd or fake online material.

Greg Barlow, Merriam-Webster’s president, said the word captures how AI-generated content has fascinated, annoyed and sometimes alarmed people. Tools like AI video generators can produce deepfakes and manipulated clips in seconds.

The spike in searches for ‘slop’ shows growing public awareness of poor-quality content and a desire for authenticity. People want real, genuine material rather than AI-driven junk content.

AI-generated slop includes everything from absurd videos to fake news and junky digital books. Merriam-Webster selects its word of the year by analysing search trends and cultural relevance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI generated podcasts flood platforms and disrupt the audio industry

Podcasts generated by AI are rapidly reshaping the audio industry, with automated shows flooding platforms such as Spotify, Apple Podcasts and YouTube.

Advances in voice cloning and speech synthesis have enabled the production to large volumes of content at minimal cost, allowing AI hosts to compete directly with human creators in an already crowded market.

Some established podcasters are experimenting cautiously, using cloned voices for translation, post-production edits or emergency replacements. Others have embraced full automation, launching synthetic personalities designed to deliver commentary, biographies and niche updates at speed.

Studios, such as Los Angeles-based Inception Point AI, have scaled the model to scale, producing hundreds of thousands of episodes by targeting micro-audiences and trending searches instead of premium advertising slots.

The rapid expansion is fuelling concern across the industry, where trust and human connection remain central to listener loyalty.

Researchers and networks warn that large-scale automation risks devaluing premium content, while creators and audiences question how far AI voices can replace authenticity without undermining the medium itself.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US TikTok investors face uncertainty as sale delayed again

Investors keen to buy TikTok’s US operations say they are left waiting as the sale is delayed again. ByteDance, TikTok’s Chinese owner, was required to sell or be blocked under a 2024 law.

US President Donald Trump seems set to extend the deadline for a fifth time. Billionaires, including Frank McCourt, Alexis Ohanian and Kevin O’Leary, are awaiting approval.

Investor McCourt confirmed his group has raised the necessary capital and is prepared to move forward once the sale is allowed. National security concerns remain the main reason for the ongoing delays.

Project Liberty, led by McCourt, plans to operate TikTok without Chinese technology, including the recommendation algorithm. The group has developed alternative systems to run the platform independently.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Study warns that LLMs are vulnerable to minimal tampering

Researchers from Anthropic, the UK AI Security Institute and the Alan Turing Institute have shown that only a few hundred crafted samples can poison LLM models. The tests revealed that around 250 malicious entries could embed a backdoor that triggers gibberish responses when a specific phrase appears.

Models ranging from 600 million to 13 billion parameters (such as Pythia) were affected, highlighting the scale-independent nature of the weakness. A planted phrase such as ‘sudo’ caused output collapse, raising concerns about targeted disruption and the ease of manipulating widely trained systems.

Security specialists note that denial-of-service effects are worrying, yet deceptive outputs pose far greater risk. Prior studies already demonstrated that medical and safety-critical models can be destabilised by tiny quantities of misleading data, heightening the urgency for robust dataset controls.

Researchers warn that open ecosystems and scraped corpora make silent data poisoning increasingly feasible. Developers are urged to adopt stronger provenance checks and continuous auditing, as reliance on LLMs continues to expand for AI purposes across technical and everyday applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Conduit revolutionises neuro-language research with 10,000-hour dataset

A San Francisco start-up, named Conduit, has spent six months building what it claims is the largest neural language dataset ever assembled, capturing around 10,000 hours of non-invasive brain recordings from thousands of participants.

The project aims to train thought-to-text AI systems that interpret semantic intent from brain activity moments before speech or typing occurs.

Participants take part in extended conversational sessions instead of rigid laboratory tasks, interacting freely with large language models through speech or simplified keyboards.

Engineers found that natural dialogue produced higher quality data, allowing tighter alignment between neural signals, audio and text while increasing overall language output per session.

Conduit developed its own sensing hardware after finding no commercial system capable of supporting large-scale multimodal recording.

Custom headsets combine multiple neural sensing techniques within dense training rigs, while future inference devices will be simplified once model behaviour becomes clearer.

Power systems and data pipelines were repeatedly redesigned to balance signal clarity with scalability, leading to improved generalisation across users and environments.

As data volume increased, operational costs fell through automation and real time quality control, allowing continuous collection across long daily schedules.

With data gathering largely complete, the focus has shifted toward model training, raising new questions about the future of neural interfaces, AI-mediated communication and cognitive privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

BBVA deepens AI partnership with OpenAI

OpenAI and BBVA have agreed on a multi-year strategic collaboration designed to embed artificial intelligence across the global banking group.

An initiative that will expand the use of ChatGPT Enterprise to all 120,000 BBVA employees, marking one of the largest enterprise deployments of generative AI in the financial sector.

The programme focuses on transforming customer interactions, internal workflows and decision making.

BBVA plans to co-develop AI-driven solutions with OpenAI to support bankers, streamline risk analysis and redesign processes such as software development and productivity support, instead of relying on fragmented digital tools.

The rollout follows earlier deployments that demonstrated strong engagement and measurable efficiency gains, with employees saving hours each week on routine tasks.

ChatGPT Enterprise will be implemented with enterprise grade security and privacy safeguards, ensuring compliance within a highly regulated environment.

Beyond internal operations, BBVA is accelerating its shift toward AI native banking by expanding customer facing services powered by OpenAI models.

The collaboration reflects a broader move among major financial institutions to integrate AI at the core of products, operations and personalised banking experiences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI reshapes cybercrime investigations in India

Maharashtra police are expanding the use of an AI-powered investigation platform developed with Microsoft to tackle the rapid growth of cybercrime.

MahaCrimeOS AI, already in use across Nagpur district, will now be deployed to more than 1,100 police stations statewide, significantly accelerating case handling and investigation workflows.

The system acts as an investigation copilot, automating complaint intake, evidence extraction and legal documentation across multiple languages.

Officers can analyse transaction trails, request data from banks and telecom providers and follow standardised investigation pathways, instead of relying on slow manual processes.

Built using Microsoft Foundry and Azure OpenAI Service, MahaCrimeOS AI integrates policing protocols, criminal law references and open-source intelligence.

Investigators report major efficiency gains, handling several cases monthly where only one was previously possible, while maintaining procedural accuracy and accountability.

The initiative highlights how responsible AI deployment can strengthen public institutions.

By reducing administrative burden and improving investigative capacity, the platform allows officers to focus on victim support and crime resolution, marking a broader shift toward AI-assisted governance in India.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New law requires AI disclosure in advertising in the US

A new law in New York, US, will require advertisers to disclose when AI-generated people appear in commercial content. Governor Kathy Hochul said the measure brings transparency and protects consumers as synthetic avatars become more widespread.

A second law now requires consent from heirs or executors when using a deceased person’s likeness for commercial purposes. The rule updates the state’s publicity rights, which previously lacked clarity in the context of the generative AI era.

Industry groups welcomed the move, saying it addresses the risks posed by unregulated AI usage, particularly for actors in the film and television industries. The disclosure must be conspicuous when an avatar does not correspond to a real human.

Specific expressive works such as films, games and shows are exempt when the avatar matches its use in the work. The laws arrive as national debate intensifies and President-elect Donald Trump signals potential attempts to limit state-level AI regulation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!