China plans stricter consent rules for AI chat platforms

China is proposing new rules that would require user consent before AI companies can use chat logs for training. The draft measures aim to balance innovation with safety and the public interest.

Platforms would need to inform users when they are interacting with an AI system and provide options to access or delete their chat history. For minors, guardian consent would be required before any data is shared or stored.

Analysts say the rules may slow improvements to AI chatbots but will provide guidance on responsible development. The measures signal that some user conversations are too sensitive to be used freely as training data.

The draft rules are open for public consultation with feedback due in late January. China encourages expanding human-like AI applications once safety and reliability are demonstrated.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI cheating drives ACCA to halt online exams

The Association of Chartered Certified Accountants (ACCA) has announced it will largely end remote examinations in the UK from March 2026, requiring students to sit tests in person unless exceptional circumstances apply.

The decision aims to address a surge in cheating, particularly facilitated by AI tools.

Remote testing was introduced during the Covid-19 pandemic to allow students to continue qualifying when in-person exams were impossible. The ACCA said online assessments have now become too difficult to monitor effectively, despite efforts to strengthen safeguards against misconduct.

Investigations show cheating has affected major auditing firms, including the ‘Big Four’ and other top companies. High-profile cases, such as EY’s $100m (£74m) settlement in the US, highlight the risks posed by compromised professional examinations.

While other accounting bodies, including the Institute of Chartered Accountants in England and Wales, continue to allow some online exams, the ACCA has indicated that high-stakes assessments must now be conducted in person to maintain credibility and integrity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

KT faces action in South Korea after a femtocell security breach affects users

South Korea has blamed weak femtocell security at KT Corp for a major mobile payment breach that triggered thousands of unauthorised transactions.

Officials said the mobile operator used identical authentication certificates across its femtocells and allowed them to remain valid for ten years, meaning a device that had connected to the network once could keep doing so without ever being re-verified.
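
The failure is straightforward to illustrate. The sketch below (with hypothetical field names; not a description of KT's actual provisioning system) shows the kind of per-device, short-lifetime check that shared, decade-long certificates bypass: each femtocell presents its own credential and is re-verified on every connection.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class FemtocellCert:
    device_id: str        # unique per femtocell
    fingerprint: str      # unique per certificate
    not_before: datetime
    not_after: datetime

MAX_LIFETIME = timedelta(days=90)   # illustrative; far shorter than ten years
ISSUED: dict[str, str] = {}         # registry: device_id -> fingerprint issued to it

def authenticate(cert: FemtocellCert) -> bool:
    """Run on EVERY connection attempt, not only the first."""
    now = datetime.now(timezone.utc)
    if not (cert.not_before <= now <= cert.not_after):
        return False                                    # outside validity window
    if cert.not_after - cert.not_before > MAX_LIFETIME:
        return False                                    # rejects decade-long certificates
    # A certificate copied across many devices fails this uniqueness check.
    return ISSUED.get(cert.device_id) == cert.fingerprint
```

Under a scheme like this, a cloned certificate would fail the per-device check, and an expired or over-long one would be rejected outright.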

More than 22,000 users had identifiers exposed, and 368 people suffered unauthorised payments worth 243 million won.

Investigators also discovered that 94 KT servers were infected with more than 100 types of malware. Authorities concluded the company failed in its duty to deliver secure telecommunications services because its overall management of femtocell security was inadequate.

The government has now ordered KT to submit detailed prevention plans and will check compliance in June, while also urging operators to change authentication server addresses regularly and block illegal network access.

Officials said some hacking methods resembled a separate breach at SK Telecom, although there is no evidence that the same group carried out both attacks. KT said it accepts the findings and will soon set out compensation arrangements and further security upgrades rather than dispute the conclusions.

A separate case involving LG Uplus is being referred to police after investigators said affected servers were discarded, making a full technical review impossible.

The government warned that strong information security must become a survival priority as South Korea aims to position itself among the world’s leading AI nations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots struggle with dialect fairness

Researchers are warning that AI chatbots may treat dialect speakers unfairly rather than engaging with them neutrally. Studies across English and German dialects found that large language models often attach negative stereotypes to dialect speakers or misunderstand everyday expressions, leading to discriminatory replies.

A study in Germany tested ten language models using dialects such as Bavarian and Kölsch. The systems repeatedly described dialect speakers as uneducated or angry, and the bias became stronger when the dialect was explicitly identified.

Similar findings have emerged elsewhere, including in UK council services and in AI shopping assistants that struggled with African American English.

Experts argue that such patterns risk amplifying social inequality as governments and businesses rely more heavily on AI. One Indian job applicant even saw a chatbot change his surname to one associated with a higher caste, showing how linguistic bias can intersect with social hierarchy.

Developers are now exploring customised AI models trained with local language data so systems can respond accurately without reinforcing stereotypes.

Researchers say such bias can be tuned out of AI systems if handled responsibly, which could help protect dialect speakers rather than marginalise them.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Millions watch AI-generated brainrot content on YouTube

Kapwing research reveals that AI-generated ‘slop’ and brainrot videos now make up a significant portion of YouTube feeds, accounting for 21–33% of the first 500 Shorts shown to new users.

These rapidly produced AI videos aim to grab attention but make it harder for traditional creators to gain visibility. Analysis of top trending channels shows Spain leads in AI slop subscribers with 20.22 million, while South Korea’s channels have amassed 8.45 billion views.

India’s Bandar Apna Dost is the most-viewed AI slop channel, earning an estimated $4.25 million annually and showing the profit potential of mass AI-generated content.

The prevalence of AI slop and brainrot has sparked debates over creativity, ethics, and advertiser confidence. YouTube CEO Neal Mohan calls generative AI transformative, but the rise of automated videos raises concerns over quality and brand safety.

Researchers warn that repeated exposure to AI-generated content can distort perception and contribute to information overload. Some AI content earns artistic respect, but much normalises low-quality videos, making it harder for users to tell meaningful content from repetitive or misleading material.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New SIM cards in South Korea now require real-time facial recognition

South Korea has introduced mandatory facial recognition for anyone registering a new SIM card or eSIM, whether in-store or online.

The live scan must match the photo on an official ID so that each phone number can be tied to a verified person instead of relying on paperwork alone.

Existing users are not affected, and the requirement applies only at the moment a number is issued.
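
In practical terms, the check amounts to comparing a live capture against the ID photo and discarding both immediately. The sketch below is a minimal illustration of such a verify-and-discard flow; the embedding function is a placeholder, not any operator's real model, and the threshold is arbitrary.

```python
import numpy as np

def face_embedding(image: np.ndarray) -> np.ndarray:
    """Placeholder for a trained face-recognition model that maps a face
    image to a feature vector; here we just flatten and normalise so the
    sketch runs end to end."""
    v = image.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def verify_sim_applicant(live_frame: np.ndarray,
                         id_photo: np.ndarray,
                         threshold: float = 0.8) -> bool:
    """Match the live scan against the ID photo; embeddings are computed
    transiently and never persisted, mirroring the stated 'real-time
    verification only, no storage' requirement."""
    similarity = float(face_embedding(live_frame) @ face_embedding(id_photo))
    return similarity >= threshold

# Toy usage: an identical image trivially matches; an unrelated one should not.
rng = np.random.default_rng(0)
photo = rng.random((64, 64))
print(verify_sim_applicant(photo, photo))                 # True
print(verify_sim_applicant(photo, rng.random((64, 64))))  # very likely False
```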

The government argues that stricter checks are needed because telecom fraud has become industrialised and relies heavily on illegally registered SIM cards.

Criminal groups have used stolen identity data to obtain large volumes of numbers that can be swapped quickly to avoid detection. Regulators now see SIM issuance as the weakest link and the point where intervention is most effective.

Telecom companies must integrate biometric checks into onboarding, while authorities insist that facial data is used only for real-time verification and not stored. Privacy advocates warn that biometric verification creates new risks because faces cannot be changed if compromised.

They also question whether such a broad rule is proportionate when mobile access is essential for daily life.

The policy places South Korea in a unique position internationally, combining mandatory biometrics with defined legal limits. Its success will be judged on whether fraud meaningfully declines rather than simply being displaced.

The rule has become a test case for how far governments should extend biometric identity checks into routine services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI transforms Indian filmmaking

Filmmakers in India are rapidly adopting AI tools like ChatGPT, Midjourney and Stable Diffusion to create visuals, clone voices, and streamline production processes for both independent and large-scale films.

Low-budget directors can now produce films almost entirely on their own, reducing costs and production time. Filmmakers use AI to visualise scenes, experiment creatively, and plan sound and effects efficiently.

AI cannot fully capture cultural nuance, emotional depth, or storytelling intuition, so human oversight remains essential. Questions around intellectual property, labour protections, and ethics are still unresolved.

Hollywood has resisted AI, with strikes over rights and labour concerns. Indian filmmakers, however, carefully combine AI tools with human creativity to preserve artistic vision and cultural nuance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI slop dominates YouTube recommendations for new users

More than 20 percent of videos recommended to new YouTube users are low-quality, attention-driven content commonly referred to as AI slop, according to new research. The findings raise concerns about how recommendation systems shape early user experience on the platform.

Video-editing firm Kapwing analysed 15,000 of YouTube’s top channels around the world. Researchers identified 278 channels consisting entirely of AI-generated slop, designed primarily to maximise views rather than provide substantive content.

These channels have collectively amassed more than 63 billion views and 221 million subscribers. Kapwing estimates they generate around $117 million in annual revenue through advertising and engagement.

To test recommendations directly, researchers created a new YouTube account and reviewed its first 500 suggested videos. Of these, 104 (about 21 percent) were classified as AI slop, with around one third falling into a category described as brainrot content.

Kapwing found that AI slop channels attract large audiences globally, including tens of millions of subscribers in countries such as Spain, Egypt, the United States, and Brazil. Researchers said the scale highlights the growing reach of low-quality AI-generated video content.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Stronger safeguards arrive with OpenAI’s GPT-5.2 release

OpenAI has launched GPT-5.2, highlighting improved safety performance in conversations involving mental health. The company said the update strengthens how its models respond to signs of suicide, self-harm, emotional distress, and reliance on the chatbot.

The release follows criticism and legal challenges accusing ChatGPT of contributing to psychosis, paranoia, and delusional thinking in some users. Several cases have highlighted the risks of prolonged emotional engagement with AI systems.

In response to a wrongful death lawsuit involving a US teenager, OpenAI denied responsibility while stating that ChatGPT encouraged the user to seek help. The company also committed to improving responses when users display warning signs of mental health crises.

OpenAI said GPT-5.2 produces fewer undesirable responses in sensitive situations than earlier versions. According to the company, the model scores higher on internal safety tests related to self-harm, emotional reliance, and mental health.

The update builds on OpenAI’s use of a training approach known as safe completion, which aims to balance helpfulness and safety. Detailed performance information has been published in the GPT-5.2 system card.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!