AI chatbots struggle with dialect fairness

Researchers are warning that AI chatbots may treat dialect speakers unfairly instead of engaging with them neutrally. Studies across English and German dialects found that large language models often attach negative stereotypes or misunderstand everyday expressions, leading to discriminatory replies.

A study in Germany tested ten language models using dialects such as Bavarian and Kölsch. The systems repeatedly described dialect speakers as uneducated or angry, and the bias became stronger when the dialect was explicitly identified.

Similar findings have emerged elsewhere, including in UK council services and in AI shopping assistants that struggled with African American English.

Experts argue that such patterns risk amplifying social inequality as governments and businesses rely more heavily on AI. One Indian job applicant even saw a chatbot change his surname to reflect a higher caste, showing how linguistic bias can intersect with social hierarchy instead of challenging it.

Developers are now exploring customised AI models trained with local language data so systems can respond accurately without reinforcing stereotypes.

Researchers say bias can be tuned out of AI if handled responsibly, which could help protect dialect speakers rather than marginalise them.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Millions watch AI-generated brainrot content on YouTube

Research by Kapwing shows that AI-generated ‘slop’ and brainrot videos now make up a significant share of YouTube feeds, accounting for 21–33% of the first 500 Shorts shown to new users.

These rapidly produced AI videos aim to grab attention but make it harder for traditional creators to gain visibility. Analysis of top trending channels shows Spain leads in AI slop subscribers with 20.22 million, while South Korea’s channels have amassed 8.45 billion views.

India’s Bandar Apna Dost is the most-viewed AI slop channel, earning an estimated $4.25 million annually and showing the profit potential of mass AI-generated content.

The prevalence of AI slop and brainrot has sparked debates over creativity, ethics, and advertiser confidence. YouTube CEO Neal Mohan calls generative AI transformative, but the rise of automated videos raises concerns over quality and brand safety.

Researchers warn that repeated exposure to AI-generated content can distort perception and contribute to information overload. Some AI content earns artistic respect, but much normalises low-quality videos, making it harder for users to tell meaningful content from repetitive or misleading material.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SoftBank invests $4 billion in global AI networks

SoftBank Group has agreed to acquire DigitalBridge for $4 billion, strengthening its global digital infrastructure capabilities. The move aims to scale data centres, connectivity, and edge networks to support next-generation AI services.

The acquisition aligns with SoftBank’s mission to develop Artificial Super Intelligence (ASI), providing the compute power and connectivity needed to deploy AI at scale.

DigitalBridge’s global portfolio of data centres, cell towers, fibre networks, and edge infrastructure will enhance SoftBank’s ability to finance and operate these assets worldwide.

DigitalBridge will continue to operate independently under CEO Marc Ganzi. The transaction, valued at a 15% premium to DigitalBridge’s closing share price, is expected to close in the second half of 2026, pending regulatory approval.

SoftBank and DigitalBridge anticipate that the combined resources will accelerate investments in AI infrastructure, supporting the rapid growth of technology companies and fostering the development of advanced AI applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China’s AI sector accelerates after breakthrough year

China’s AI industry entered 2025 as a perceived follower but ended the year transformed. Rapid technical progress and commercial milestones reshaped global perceptions of Chinese innovation.

The surprise release of DeepSeek R1 demonstrated strong reasoning performance at unusually low training costs. Open access challenged assumptions about chip dominance and boosted adoption across emerging markets.

State backing and private capital followed quickly, lifting the AI sector’s valuations and supporting embodied intelligence projects. Leading model developers prepared IPO filings, signalling confidence in long-term growth.

Chinese firms increasingly prioritised practical deployment, multilingual capability, and service integration. Global expansion now stresses cultural adaptation rather than raw technical benchmarks alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI can mislead on tides and outdoor safety

UK outdoor enthusiasts are being warned not to rely solely on AI for tide times or weather forecasts. Errors recently stranded visitors on Sully Island, showing the limits of unverified information.

Maritime authorities recommend consulting official sources such as the UK Hydrographic Office and Met Office. AI tools may misread tables or local data, making human oversight essential for safety.

Mountain rescue teams have reported similar issues where inexperienced walkers used AI to plan trips. Even with good equipment, a lack of judgement can turn minor errors into dangerous situations.

Practical experience, professional guidance, and verified data remain critical for safe outdoor activities. Relying on AI alone can create serious risks, especially on tidal beaches and challenging mountain routes.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI transforms Indian filmmaking

Filmmakers in India are rapidly adopting AI tools like ChatGPT, Midjourney and Stable Diffusion to create visuals, clone voices, and streamline production processes for both independent and large-scale films.

Low-budget directors can now produce films almost entirely on their own, reducing costs and production time. Filmmakers use AI to visualise scenes, experiment creatively, and plan sound and effects efficiently.

AI cannot fully capture cultural nuance, emotional depth, or storytelling intuition, so human oversight remains essential. Intellectual property, labour protections, and ethical issues remain unresolved.

Hollywood has resisted AI, with strikes over rights and labour concerns. Indian filmmakers, however, carefully combine AI tools with human creativity to preserve artistic vision and cultural nuance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI slop dominates YouTube recommendations for new users

More than 20 percent of videos recommended to new YouTube users are low-quality, attention-driven content commonly referred to as AI slop, according to new research. The findings raise concerns about how recommendation systems shape early user experience on the platform.

Video-editing firm Kapwing analysed 15,000 of YouTube’s top channels worldwide. Researchers identified 278 channels consisting entirely of AI-generated slop, designed primarily to maximise views rather than provide substantive content.

These channels have collectively amassed more than 63 billion views and 221 million subscribers. Kapwing estimates the network generates around $117 million in annual revenue through advertising and engagement.

To test recommendations directly, researchers created a new YouTube account and reviewed its first 500 suggested videos. Of these, 104 were classified as AI slop, with around one third falling into a category described as brainrot content.
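As a quick sanity check, the sketch below simply reproduces the headline share from the counts reported above; the sample counts are Kapwing’s, and the classification of individual videos is the researchers’ own, not reproduced here.

```python
# Back-of-the-envelope check of the Kapwing sample figures cited above.
# Raw counts come from the article; how each video was classified is Kapwing's call.
sample_size = 500            # first videos suggested to a fresh YouTube account
ai_slop_videos = 104         # of those, classified as AI slop by the researchers
brainrot_videos = 104 // 3   # "around one third" described as brainrot content

slop_share = ai_slop_videos / sample_size
print(f"AI slop share of recommendations: {slop_share:.1%}")        # 20.8%, i.e. "more than 20 percent"
print(f"Approximate brainrot videos in the sample: {brainrot_videos}")  # roughly 34 of 500
```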

Kapwing found that AI slop channels attract large audiences globally, including tens of millions of subscribers in countries such as Spain, Egypt, the United States, and Brazil. Researchers said the scale highlights the growing reach of low-quality AI-generated video content.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Stronger safeguards arrive with OpenAI’s GPT-5.2 release

OpenAI has launched GPT-5.2, highlighting improved safety performance in conversations involving mental health. The company said the update strengthens how its models respond to signs of suicide risk, self-harm, emotional distress, and emotional reliance on the chatbot.

The release follows criticism and legal challenges accusing ChatGPT of contributing to psychosis, paranoia, and delusional thinking in some users. Several cases have highlighted the risks of prolonged emotional engagement with AI systems.

In response to a wrongful death lawsuit involving a US teenager, OpenAI denied responsibility while stating that ChatGPT encouraged the user to seek help. The company also committed to improving responses when users display warning signs of mental health crises.

OpenAI said GPT-5.2 produces fewer undesirable responses in sensitive situations than earlier versions. According to the company, the model scores higher on internal safety tests related to self-harm, emotional reliance, and mental health.

The update builds on OpenAI’s use of a training approach known as safe completion, which aims to balance helpfulness and safety. Detailed performance information has been published in the GPT-5.2 system card.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI reshaped European healthcare in 2025

Europe’s healthcare systems turned increasingly to AI in 2025, using new tools to predict disease, speed diagnosis, and reduce administrative workloads.

Countries including Finland, Estonia and Spain adopted AI to train staff, analyse medical data and detect illness earlier, while hospitals introduced AI scribes to free up doctors’ time with patients.

Researchers also advanced AI models able to forecast more than a thousand conditions many years before diagnosis, including heart disease, diabetes and certain cancers.

Further tools detected heart problems in seconds, flagged prostate cancer risks more quickly and monitored patients recovering from stent procedures instead of relying only on manual checks.

Experts warned that AI should support clinicians rather than replace them, as doctors continue to outperform AI in emergency care and chatbots struggle with mental health needs.

Security specialists also cautioned that extremists could try to exploit AI to develop biological threats, prompting calls for stronger safeguards.

Despite such risks, AI-driven approaches are now embedded across European medicine, from combating antibiotic-resistant bacteria to streamlining routine paperwork. Policymakers and health leaders are increasingly focused on how to scale innovation safely instead of simply chasing rapid deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI is changing how Europeans work and learn

Generative AI has become an everyday tool across Europe, with millions using platforms such as ChatGPT, Gemini and Grok for personal, work, and educational purposes. Eurostat data shows that around a third of people aged 16–74 tried AI tools at least once in 2025.

Adoption varies widely across the continent. Norway leads with 56 percent of the population using AI, while Turkey records only 17 percent.

Within the EU, Denmark tops usage at 48 percent, and Romania lags at 18 percent. Northern and digitally advanced countries dominate, while southern, central-eastern, and Balkan nations show lower engagement.

Researchers attribute the variation to general digital literacy, internet use, and familiarity with technology rather than to government policy alone. AI tools are used more for personal purposes than for work.

Across the EU, 25 percent use AI for personal tasks, compared with 15 percent for professional applications.

Usage in education is even lower, with only 9 percent employing AI in formal learning, peaking at 21 percent in Sweden and Switzerland and dropping to just 1 percent in Hungary.

Experts stress that while access is essential, understanding how to apply AI effectively remains a key barrier. Countries with strong digital foundations adopt AI more, while limited awareness and skills restrict use, emphasising the need for AI literacy and infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!