Millions watch AI-generated brainrot content on YouTube

Kapwing research reveals that AI-generated ‘slop’ and brainrot videos now dominate a significant portion of YouTube feeds, accounting for 21–33% of the first 500 Shorts seen by new users.

These rapidly produced AI videos aim to grab attention but make it harder for traditional creators to gain visibility. Analysis of top trending channels shows Spain leads in AI slop subscribers with 20.22 million, while South Korea’s channels have amassed 8.45 billion views.

India’s Bandar Apna Dost is the most-viewed AI slop channel, earning an estimated $4.25 million annually and showing the profit potential of mass AI-generated content.

The prevalence of AI slop and brainrot has sparked debates over creativity, ethics, and advertiser confidence. YouTube CEO Neal Mohan calls generative AI transformative, but the rise of automated videos raises concerns over quality and brand safety.

Researchers warn that repeated exposure to AI-generated content can distort perception and contribute to information overload. Some AI content earns artistic respect, but much normalises low-quality videos, making it harder for users to tell meaningful content from repetitive or misleading material.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

China’s AI sector accelerates after breakthrough year

China’s AI industry entered 2025 as a perceived follower but ended the year transformed. Rapid technical progress and commercial milestones reshaped global perceptions of Chinese innovation.

The surprise release of DeepSeek R1 demonstrated strong reasoning performance at unusually low training costs. Open access challenged assumptions about chip dominance and boosted adoption across emerging markets.

State backing and private capital followed quickly, lifting valuations across the AI sector and supporting embodied intelligence projects. Leading model developers prepared IPO filings, signalling confidence in long-term growth.

Chinese firms increasingly prioritised practical deployment, multilingual capability, and service integration. Global expansion now stresses cultural adaptation rather than raw technical benchmarks alone.


AI can mislead on tides and outdoor safety

UK outdoor enthusiasts are warned not to rely solely on AI for tide times or weather. Errors recently stranded visitors on Sully Island, showing the limits of unverified information.

Maritime authorities recommend consulting official sources such as the UK Hydrographic Office and Met Office. AI tools may misread tables or local data, making human oversight essential for safety.

Mountain rescue teams report similar issues when inexperienced walkers use AI to plan trips. Even with good equipment, a lack of judgement can turn minor errors into dangerous situations.

Practical experience, professional guidance, and verified data remain critical for safe outdoor activities. Relying on AI alone can create serious risks, especially on tidal beaches and challenging mountain routes.


AI transforms Indian filmmaking

Filmmakers in India are rapidly adopting AI tools like ChatGPT, Midjourney and Stable Diffusion to create visuals, clone voices, and streamline production processes for both independent and large-scale films.

Low-budget directors can now produce films almost entirely independently, cutting costs and production time. Filmmakers use AI to visualise scenes, experiment creatively, and plan sound and effects efficiently.

AI cannot fully capture cultural nuance, emotional depth, or storytelling intuition, so human oversight remains essential. Intellectual property, labour protections, and ethical issues remain unresolved.

Hollywood has resisted AI, with strikes over rights and labour concerns. Indian filmmakers, however, carefully combine AI tools with human creativity to preserve artistic vision and cultural nuance.


Historic Ghosts house gains digital protection

In the UK, a historic Surrey manor made famous by the BBC sitcom Ghosts has been digitally mapped. Engineers completed a detailed 3D survey of West Horsley Place.

The year-long project used laser scanners to capture millions of measurements. Researchers from the University of Surrey documented every room and structural feature.

The digital model reveals hidden deterioration and supports long-term conservation planning. Future phases may add sensors to track temperature, humidity, and structural movement.

British researchers say the work could enhance preservation and visitor engagement. Virtual tours and augmented storytelling may deepen understanding of the estate’s history.


Coupang faces backlash over voucher compensation after data breach

South Korean e-commerce firm Coupang has apologised for a major data breach affecting more than 33 million users and announced a compensation package worth 1.69 trillion won. Founder Kim Bom acknowledged the disruption caused by the breach, following public and political backlash over the incident.

Under the plan, affected customers will receive vouchers worth 50,000 won, usable only on Coupang’s own platforms. The company said the measure was intended to compensate users, but the approach has drawn criticism from lawmakers and consumer groups.

Choi Min-hee, a lawmaker from the ruling Democratic Party, criticised the decision in a social media post, arguing that the vouchers were tied to services with limited use. She accused Coupang of attempting to turn the crisis into a business opportunity.

Consumer advocacy groups echoed these concerns, saying the compensation plan trivialised the seriousness of the breach. They argued that limiting compensation to vouchers resembled a marketing strategy rather than meaningful restitution for affected users.

The controversy comes as the National Assembly of South Korea prepares to hold hearings on Coupang. While the company has admitted negligence, it has declined to appear before lawmakers amid scrutiny of its handling of the breach.


AI slop dominates YouTube recommendations for new users

More than 20 percent of videos recommended to new YouTube users are low-quality, attention-driven content commonly referred to as AI slop, according to new research. The findings raise concerns about how recommendation systems shape early user experience on the platform.

Video-editing firm Kapwing analysed 15,000 of YouTube’s top channels worldwide. Researchers identified 278 channels consisting entirely of AI-generated slop, designed primarily to maximise views rather than provide substantive content.

These channels have collectively amassed more than 63 billion views and 221 million subscribers. Kapwing estimates the network generates around $117 million in annual revenue through advertising and engagement.

To test recommendations directly, researchers created a new YouTube account and reviewed its first 500 suggested videos. Of these, 104 were classified as AI slop, with around one third falling into a category described as brainrot content.

Kapwing found that AI slop channels attract large audiences globally, including tens of millions of subscribers in countries such as Spain, Egypt, the United States, and Brazil. Researchers said the scale highlights the growing reach of low-quality AI-generated video content.


Stronger safeguards arrive with OpenAI’s GPT-5.2 release

OpenAI has launched GPT-5.2, highlighting improved safety performance in conversations involving mental health. The company said the update strengthens how its models respond to signs of suicide, self-harm, emotional distress, and reliance on the chatbot.

The release follows criticism and legal challenges accusing ChatGPT of contributing to psychosis, paranoia, and delusional thinking in some users. Several cases have highlighted the risks of prolonged emotional engagement with AI systems.

In response to a wrongful death lawsuit involving a US teenager, OpenAI denied responsibility while stating that ChatGPT encouraged the user to seek help. The company also committed to improving responses when users display warning signs of mental health crises.

OpenAI said GPT-5.2 produces fewer undesirable responses in sensitive situations than earlier versions. According to the company, the model scores higher on internal safety tests related to self-harm, emotional reliance, and mental health.

The update builds on OpenAI’s use of a training approach known as safe completion, which aims to balance helpfulness and safety. Detailed performance information has been published in the GPT-5.2 system card.


AI reshaped European healthcare in 2025

Europe’s healthcare systems turned increasingly to AI in 2025, using new tools to predict disease, speed diagnosis, and reduce administrative workloads.

Countries including Finland, Estonia and Spain adopted AI to train staff, analyse medical data and detect illness earlier, while hospitals introduced AI scribes to free up doctors’ time with patients.

Researchers also advanced AI models able to forecast more than a thousand conditions many years before diagnosis, including heart disease, diabetes and certain cancers.

Further tools detected heart problems in seconds, flagged prostate cancer risks more quickly and monitored patients recovering from stent procedures instead of relying only on manual checks.

Experts warned that AI should support clinicians rather than replace them, as doctors continue to outperform AI in emergency care and chatbots struggle with mental health needs.

Security specialists also cautioned that extremists could try to exploit AI to develop biological threats, prompting calls for stronger safeguards.

Despite such risks, AI-driven approaches are now embedded across European medicine, from combating antibiotic-resistant bacteria to streamlining routine paperwork. Policymakers and health leaders are increasingly focused on how to scale innovation safely instead of simply chasing rapid deployment.


New Chinese rules target AI chatbots and emotional manipulation

China has proposed new rules to restrict AI chatbots from influencing human emotions in ways that could lead to suicide or self-harm. The Cyberspace Administration released draft regulations, open for public comment until late January.

The measures target human-like interactive AI services, including emotionally responsive AI chatbots, that simulate personality and engage users through text, images, audio, or video. Officials say the proposals signal a shift from content safety towards emotional safety as AI companions gain popularity.

Under the draft rules, AI chatbot services would be barred from encouraging self-harm, emotional manipulation, or obscene, violent, or gambling-related content. Providers would be required to involve human moderators if users express suicidal intent.

Additional provisions would strengthen safeguards for minors, including guardian consent and usage limits for emotionally interactive systems. Platforms would also face security assessments and interaction reminders when operating services with large user bases.

Experts say the proposals could mark the world’s first attempt to regulate emotionally responsive AI systems. The move comes as China-based chatbot firms pursue public listings and as global scrutiny grows over how conversational AI affects mental health and user behaviour.
