AI chatbots spreading rumours raise new risks

Researchers warn AI chatbots are spreading rumours about real people without human oversight. Unlike human gossip, bot-to-bot exchanges can escalate unchecked, growing more extreme as they move through AI networks.

Philosophers Joel Krueger and Lucy Osler from the University of Exeter describe this phenomenon as ‘feral gossip.’ It involves negative evaluations about absent third parties and can persist undetected across platforms.

One real-world example involves tech reporter Kevin Roose, who encountered hostile AI-generated assessments of his work from multiple chatbots, seemingly amplified as the content filtered through training data.

The researchers highlight that AI systems lack the social checks humans provide, allowing rumours to intensify unchecked. Chatbots are designed to appear trustworthy and personal, so negative statements can seem credible.

Such misinformation has already affected journalists, academics, and public officials, sometimes prompting legal action. Technosocial harms from AI gossip extend beyond embarrassment. False claims can damage reputations, influence decisions, and persist online and offline.

While chatbots are not conscious, their prioritisation of conversational fluency over factual accuracy can make the rumours they spread difficult to detect and correct.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI in education receives growing attention across the EU

A recent Flash Eurobarometer survey shows that EU citizens consider digital skills essential for all levels of education. Nearly nine in ten respondents believe schools should teach students to manage the effects of technology on mental and physical health.

Most also agree that digital skills deserve the same focus as traditional subjects such as reading, mathematics and science.

The survey highlights growing interest in AI in education. Over half of respondents see AI as both beneficial and challenging, emphasising the need for careful assessment. Citizens also expect teachers to be trained in AI use, including Generative AI, to guide students effectively.

While many support smartphone bans in schools, there is strong backing for digital learning tools, with 87% in favour of promoting technology designed specifically for education. Teachers, parents and families are seen as key in fostering safe and responsible technology use.

Overall, EU citizens advocate for a balanced approach that combines digital literacy, responsible use of technology, and the professional support of educators and families to foster a healthy learning environment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SK Telecom introduces South Korea’s first hyperscale AI model

Telecommunications firm SK Telecom is preparing to unveil A.X K1, South Korea’s first hyperscale language model, built with 519 billion parameters.

Only around 33 billion parameters are activated during inference, allowing the model to maintain strong performance without demanding excessive computing power. The project is part of a national initiative involving universities and industry partners.
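For readers curious about the mechanics, the sketch below illustrates how such sparse activation typically works in mixture-of-experts designs: a router selects a few experts per token, so only a fraction of the total parameters is exercised on any given input. It is a generic, illustrative example; the expert counts, sizes and routing shown here are invented and do not describe SK Telecom’s actual architecture.

```python
# Illustrative sketch only: a generic top-k mixture-of-experts layer showing how
# a model can hold many parameters while activating only a fraction per token.
# Sizes and weights are made up; this is NOT SK Telecom's published design.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

d_model, n_experts, top_k = 64, 16, 2                  # hypothetical sizes
router_w = rng.standard_normal((d_model, n_experts)) * 0.02
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]

def moe_forward(token):                                 # token: (d_model,)
    scores = softmax(token @ router_w)                  # router decides which experts fire
    chosen = np.argsort(scores)[-top_k:]                # only the top-k experts are activated
    out = np.zeros_like(token)
    for i in chosen:                                    # the remaining experts stay idle,
        out += scores[i] * (token @ experts[i])         # so compute scales with k, not n_experts
    return out

print(moe_forward(rng.standard_normal(d_model)).shape)  # (64,)
```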

The company expects A.X K1 to outperform smaller systems in complex reasoning, mathematics and multilingual understanding, while also supporting code generation and autonomous AI agents.

At such a scale, the model can operate as a teacher system that transfers knowledge to smaller, domain-specific tools that might directly improve daily services and industrial processes.
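The teacher-to-student transfer described above is commonly implemented as knowledge distillation, where a smaller model is trained to match the softened output distribution of a larger one. The snippet below is a minimal, generic sketch of that loss; the temperature and logits are toy values, not details from the article.

```python
# Illustrative sketch only: the standard soft-label distillation loss a large
# "teacher" model can use to transfer knowledge to a smaller "student".
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student distributions."""
    t = softmax(teacher_logits / temperature)
    s = softmax(student_logits / temperature)
    return float(np.sum(t * (np.log(t + 1e-9) - np.log(s + 1e-9))) * temperature ** 2)

teacher = np.array([4.0, 1.0, 0.5])   # toy logits from the large model
student = np.array([2.5, 1.2, 0.8])   # toy logits from the smaller model
print(distillation_loss(teacher, student))
```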

Unlike many global models trained mainly in English, A.X K1 has been trained in Korean from the outset so it naturally understands local language, culture and context.

SK Telecom plans to deploy the model through its AI service Adot, which already has more than 10 million subscribers, allowing access via calls, messages, the web and mobile apps.

The company foresees applications in workplace productivity, manufacturing optimisation, gaming dialogue, robotics and semiconductor performance testing.

Research will continue so the model can support the wider AI ecosystem of South Korea, and SK Telecom plans to open-source A.X K1 along with an API to help local developers create new AI agents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The AI terms that shaped debate and disruption in 2025

AI continued to dominate public debate in 2025, not only through new products and investment rounds, but also through a rapidly evolving vocabulary that captured both promise and unease.

From ambitious visions of superintelligence to cultural shorthand like ‘slop’, language became a lens through which society processed another turbulent year for AI.

Several terms reflected the industry’s technical ambitions. Concepts such as superintelligence, reasoning models, world models and physical intelligence pointed to efforts to push AI beyond text generation towards deeper problem-solving and real-world interaction.

Developments by companies including Meta, OpenAI, DeepSeek and Google DeepMind reinforced the sense that scale, efficiency and new training approaches are now competing pathways to progress, rather than sheer computing power alone.

Other expressions highlighted growing social and economic tensions. Words like hyperscalers, bubble and distillation entered mainstream debate as data centres expanded, valuations rose, and cheaper model-building methods disrupted established players.

At the same time, legal and ethical debates intensified around fair use, chatbot behaviour and the psychological impact of prolonged AI interaction, underscoring the gap between innovation speed and regulatory clarity.

Cultural reactions also influenced the development of the AI lexicon. Terms such as vibe coding, agentic and sycophancy revealed how generative systems are reshaping work, creativity and user trust, while ‘slop’ emerged as a blunt critique of low-quality, AI-generated content flooding online spaces.

Together, these phrases chart a year in which AI moved further into everyday life, leaving society to wrestle with what should be encouraged, controlled or questioned.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tsinghua University is emerging as a cornerstone of China’s AI strategy

China’s Tsinghua University has emerged as a central hub in the country’s push to become a global leader in AI. The campus hosts a high level of research activity, with students and faculty working across disciplines related to AI development.

Momentum has been boosted by the success of DeepSeek, an AI startup founded by Tsinghua University alumni. The company reinforced confidence that Chinese teams can compete with leading international laboratories.

The university’s rise is closely aligned with Beijing’s national technology strategy. Government backing has included subsidies, tax incentives, and policy support, as well as public endorsements of AI entrepreneurs affiliated with Tsinghua.

Patent and publication data highlight the scale of output. Tsinghua has filed thousands of AI-related patents and ranks among the world’s most cited institutions in AI research, reflecting China’s rapidly expanding share of global AI innovation.

Despite this growth, the United States continues to lead in influential patents and top-performing models. Analysts note, however, that a narrowing gap is expected, as China produces a growing share of elite AI researchers and expands AI education from schools to advanced research.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New AI directorates signal Türkiye’s push for AI

Türkiye has announced new measures to expand its AI ecosystem and strengthen public-sector adoption of the technology. The changes were published in the Official Gazette, according to Industry and Technology Minister Mehmet Fatih Kacir.

The Ministry’s Directorate General of National Technology has been renamed the Directorate General of National Technology and AI. The unit will oversee policies on data centres, cloud infrastructure, certification standards, and regulatory processes.

The directorate will also coordinate national AI governance, support startups and research, and promote the ethical and reliable use of AI. Its remit includes expanding data capacity, infrastructure, workforce development, and international cooperation.

Separately, a Public AI Directorate General has been established under the Presidency’s Cybersecurity Directorate. The new body will guide the use of AI across government institutions and lead regulatory work on public-sector AI applications.

Officials say the unit will align national legislation with international frameworks and set standards for data governance and shared data infrastructure. The government aims to position Türkiye as a leading country in the development of AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Kazakhstan climbs global AI readiness ranking

Kazakhstan has risen to 60th place out of 195 countries in the 2025 Government AI Readiness Index, marking a 16-place improvement and highlighting a year of accelerated institutional and policy development.

The ranking, compiled by Oxford Insights, measures governments’ ability to adopt and manage AI across public administration, the economy, and social systems.

At a regional level, Kazakhstan now leads Central Asia in AI readiness. A strong performance in the Public Sector Adoption pillar, with a score of 73.59, reflects the widespread use of digital services, e-government platforms, and a shift toward data-led public service delivery.

The country’s advanced digital infrastructure, high internet penetration, and mature electronic government ecosystem provide a solid foundation for scaling AI nationwide.

Political and governance initiatives have further strengthened Kazakhstan’s position. In 2025, the government enacted its first comprehensive AI law, which covers ethics, safety, and digital innovation.

At the same time, the Ministry of Digital Development, Innovation and Aerospace Industry was restructured into a dedicated Ministry of Artificial Intelligence and Digital Development, signalling the government’s commitment to making AI a central policy priority.

Kazakhstan’s progress demonstrates how a focused approach to policy, infrastructure, and institutions can enhance AI readiness, enabling the responsible and effective integration of AI across public and economic sectors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Japan to boost spending on semiconductors and AI

Japan’s Ministry of Economy, Trade and Industry is set to significantly increase funding for advanced semiconductors and AI in the coming fiscal year.

Spending on chips and AI is expected to nearly quadruple to ¥1.23 trillion ($7.9 billion), around 40% of the ministry’s ¥3.07 trillion budget, which is itself a 50% increase from last year. The budget, approved by Prime Minister Sanae Takaichi’s Cabinet, will be debated in parliament early next year.

The funding boost reflects Japan’s push to strengthen its position in frontier technologies amid global competition with the US and China. The government will fund most of the additional support through regular budgets, ensuring more stable backing for semiconductor and AI development.

Key initiatives include ¥150 billion for chip venture Rapidus and ¥387.3 billion for domestic foundation AI models, data infrastructure, and ‘physical AI’ for robotics and machinery control.

The budget also allocates ¥5 billion for critical minerals and ¥122 billion for decarbonisation, including next-generation nuclear power. Special bonds worth ¥1.78 trillion will also support Japanese investment in the US, reinforcing the trade agreement between the two countries.

The increase in funding demonstrates Japan’s strategic focus on achieving technological self-sufficiency and enhancing global competitiveness in emerging industries, thereby ensuring long-term support for innovation and critical infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MIT-IBM researchers improve large language models with PaTH Attention

Researchers at MIT and the MIT-IBM Watson AI Lab have introduced a new attention mechanism designed to enhance the capabilities of large language models (LLMs) in tracking state and reasoning across long texts.

Unlike traditional positional encoding methods, the PaTH Attention system adapts to the content of words, enabling models to follow complex sequences more effectively.

PaTH Attention models sequences through data-dependent transformations, allowing LLMs to track how meaning changes between words instead of relying solely on relative distance.
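As an illustration of what ‘data-dependent transformations’ can mean in practice, the toy sketch below builds a small reflection from each token’s content and accumulates these transforms along the path between two positions, so the positional relationship between a query and a key depends on the intervening words rather than on distance alone. This is a loose, illustrative reconstruction of the idea, not the authors’ implementation.

```python
# Illustrative sketch only: content-dependent positional transforms, loosely in
# the spirit of PaTH Attention. Each token contributes a transformation derived
# from its own content, and the "position" between two tokens is the product of
# the transformations along the path between them. Toy sizes and data throughout.
import numpy as np

rng = np.random.default_rng(1)
d = 8
seq = rng.standard_normal((5, d))                 # toy token representations

def householder(x):
    """Data-dependent reflection I - 2 v v^T built from a token's content."""
    v = x / (np.linalg.norm(x) + 1e-9)
    return np.eye(len(x)) - 2.0 * np.outer(v, v)

transforms = [householder(t) for t in seq]

def path_transform(j, i):
    """Accumulate the per-token transforms between key position j and query position i."""
    m = np.eye(d)
    for k in range(j + 1, i + 1):
        m = transforms[k] @ m
    return m

q, k_ = seq[4], seq[1]                            # query at position 4, key at position 1
score = q @ path_transform(1, 4) @ k_             # content-aware positional interaction
print(score)
```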

The approach improves performance on long-context reasoning, multi-step recall, and language modelling benchmarks, all while remaining computationally efficient and compatible with GPUs.

Tests demonstrated consistent improvements in perplexity and content-awareness compared with conventional methods. The team combined PaTH Attention with FoX to down-weight less relevant information, improving reasoning and long-sequence understanding.

According to senior author Yoon Kim, these advances represent the next step in developing general-purpose building blocks for AI, combining expressivity, scalability, and efficiency for broader applications in structured domains such as biology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IMF calls for stronger AI regulation in global securities markets

Regulators worldwide are being urged to adopt stronger oversight frameworks for AI in capital markets after an IMF technical note warned that rapid AI adoption could reshape securities trading while increasing systemic risk.

AI brings major efficiency gains to asset management and high-frequency trading compared with slower, human-led processes, yet opacity, market volatility, cyber threats and model concentration remain significant concerns.

The IMF warns that AI could create powerful data oligopolies where only a few firms can train the strongest models, while autonomous trading agents may unintentionally collude by widening spreads without explicit coordination.

Retail investors also face rising exposure to AI washing, where financial firms exaggerate or misrepresent AI capability, making transparency, accountability and human-in-the-loop review essential safeguards.

Supervisory authorities are encouraged to scale their own AI capacity through SupTech tools for automated surveillance and social-media sentiment monitoring.

The note highlights India as a key case study, given the dominance of algorithmic trading and SEBI’s early reporting requirements for AI and machine learning. The IMF also points to the National Stock Exchange’s use of AI in fraud detection as an emerging-market model for resilient monitoring infrastructure.

The report underlines the need for regulators to prepare for AI-driven market shocks, strengthen governance obligations on regulated entities and build specialist teams capable of understanding model risk instead of reacting only after misconduct or misinformation harms investors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!