US companies are increasingly adopting Chinese AI models as part of their core technology stacks, raising questions about global leadership in AI. Pinterest, for example, has confirmed it is using Chinese-developed models to improve recommendations and shopping features.
Executives point to open-source Chinese models such as DeepSeek and tools from Alibaba as faster, cheaper and easier to customise, saying they can outperform proprietary alternatives at a fraction of the cost.
Adoption extends beyond Pinterest, with Airbnb also relying on Chinese AI to power customer service tools. Data from Hugging Face shows Chinese models frequently rank among the most downloaded worldwide, including among US developers.
Researchers at Stanford University have found that Chinese AI capabilities now match or exceed those of global peers. Meanwhile, US firms such as OpenAI and Meta remain focused on proprietary systems, leaving China to dominate open-source AI development.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI has dominated discussions at the World Economic Forum in Davos, where IMF managing director Kristalina Georgieva warned that labour markets are already undergoing rapid structural disruption.
According to Georgieva, demand for skills is shifting unevenly, with productivity gains benefiting some workers while younger people and first-time job seekers face shrinking opportunities.
Entry-level roles are particularly exposed as AI systems absorb routine and clerical tasks traditionally used to gain workplace experience.
Georgieva described the effect on young workers as comparable to a labour-market tsunami, arguing that reduced access to foundational roles risks long-term scarring for an entire generation entering employment.
IMF research suggests AI could affect roughly 60 percent of jobs in advanced economies and 40 percent globally, with only about half of exposed workers likely to benefit.
For others, automation may lead to lower wages, slower hiring and intensified pressure on middle-income roles lacking AI-driven productivity gains.
At Davos 2026, Georgieva warned that the rapid, unregulated deployment of AI in advanced economies risks outpacing public policy responses.
She argued that without clear guardrails and inclusive labour strategies, technological acceleration could deepen inequality rather than support broad-based economic resilience.
UN agencies have issued a stark warning over the accelerating risks AI poses to children online, citing rising cases of grooming, deepfakes, cyberbullying and sexual extortion.
A joint statement published on 19 January urges urgent global action, highlighting how AI tools increasingly enable predators to target vulnerable children with unprecedented precision.
Recent data underscores the scale of the threat, with technology-facilitated child abuse cases in the US surging from 4,700 in 2023 to more than 67,000 in 2024.
During the COVID-19 pandemic, online exploitation intensified, particularly affecting girls and young women, with digital abuse frequently translating into real-world harm, according to officials from the International Telecommunication Union.
Governments are tightening policies, led by Australia’s social media ban for under-16s, as the UK, France and Canada consider similar measures. UN agencies urged tech firms to prioritise child safety and called for stronger AI literacy across society.
OpenAI’s latest GPT-5.2 model has sparked concern after repeatedly citing Grokipedia, an AI-generated encyclopaedia launched by Elon Musk’s xAI, raising fresh fears of misinformation amplification.
Testing by The Guardian showed the model referencing Grokipedia multiple times when answering questions on geopolitics and historical figures.
Launched in October 2025, the AI-generated platform positions itself as a rival to Wikipedia but relies solely on automated content without human editing. Critics warn that this lack of human oversight raises the risk of factual errors and ideological bias, and Grokipedia has already faced criticism for promoting controversial narratives.
OpenAI said its systems use safety filters and diverse public sources, while xAI dismissed the concerns as media distortion. The episode deepens scrutiny of AI-generated knowledge platforms amid growing regulatory and public pressure for transparency and accountability.
Researchers and free-speech advocates are warning that coordinated swarms of AI agents could soon be deployed to manipulate public opinion at a scale capable of undermining democratic systems.
According to a consortium of academics from leading universities, advances in generative and agentic AI now enable large numbers of human-like bots to infiltrate online communities and autonomously simulate organic political discourse.
Unlike earlier forms of automated misinformation, AI swarms are designed to adapt to social dynamics, learn community norms and exchange information in pursuit of a shared objective.
By mimicking human behaviour and spreading tailored narratives gradually, such systems could fabricate consensus, amplify doubt around electoral processes and normalise anti-democratic outcomes without triggering immediate detection.
Evidence of early influence operations has already emerged in recent elections across Asia, where AI-driven accounts have engaged users with large volumes of unverifiable information rather than overt propaganda.
Researchers warn that information overload, strategic neutrality and algorithmic amplification may prove more effective than traditional disinformation campaigns.
The authors argue that democratic resilience now depends on global coordination, combining technical safeguards such as watermarking and detection tools with stronger governance of political AI use.
Without collective action, they caution that AI-enabled manipulation risks outpacing existing regulatory and institutional defences.
South Korea has moved towards regulatory action against Grok, the generative AI chatbot developed by xAI, following allegations that the system was used to generate and distribute sexually exploitative deepfake images.
The country’s Personal Information Protection Commission has launched a preliminary fact-finding review to assess whether violations occurred and whether the matter falls within its legal remit.
The review follows international reports accusing Grok of facilitating the creation of explicit and non-consensual images of real individuals, including minors.
Under the Personal Information Protection Act of South Korea, generating or altering sexual images of identifiable people without consent may constitute unlawful handling of personal data, exposing providers to enforcement action.
Concerns have intensified after civil society groups estimated that millions of explicit images were produced through Grok over a short period, with thousands involving children.
Several governments, including those in the US, Europe and Canada, have opened inquiries, while parts of Southeast Asia have opted to block access to the service altogether.
In response, xAI has introduced technical restrictions preventing users from generating or editing images of real people. Korean regulators have also demanded stronger youth protection measures from X, warning that failure to address criminal content involving minors could result in administrative penalties.
The Telangana government has launched Aikam, a new autonomous body aimed at positioning the state as a global proving ground for large-scale AI deployment. Unveiled at the World Economic Forum annual meeting in Davos, the initiative is designed to consolidate state-led AI efforts and support the development, testing, and large-scale rollout of AI solutions.
State leaders framed the initiative as a shift away from pilot projects towards execution-focused implementation, emphasising transparency, governance, and public trust. The platform is designed to operate with agility while remaining anchored within government structures, reflecting Telangana’s ambition to rank among the world’s top 20 AI innovation hubs.
Aikam will focus on ecosystem building, including mass upskilling to create an AI-ready workforce, supporting AI startups, and strengthening collaboration among academia, research institutions, industry, and government. The state will back these efforts with access to large public datasets, enhanced computing infrastructure, and a dedicated AI Fund-of-Funds to help translate ideas into deployable solutions.
Alongside Aikam, Telangana launched the Responsible AI Standard and Ethics (RAISE) Index, a framework to measure responsible AI practices across the full AI lifecycle. Several international partnerships were also announced, covering skilling, applied research, healthcare, computing, and design, reinforcing the state’s emphasis on globally collaborative and responsible AI deployment.
Indonesia is promoting blended finance as a key mechanism to meet the growing investment needs of AI and digital infrastructure. By combining public and private funding, the government aims to accelerate the development of scalable digital systems while aligning investments with sustainability goals and local capacity-building.
The rapid global expansion of AI is driving a sharp rise in demand for computing power and data centres. The government views this trend as both a strategic economic opportunity and a challenge that requires sound financial governance and well-designed policies to ensure long-term national benefits.
International financial institutions and global investors are increasingly supportive of public–private financing models. Such partnerships are seen as essential for mobilising large-scale, long-term capital and supporting the sustainable development of AI-related infrastructure in developing economies.
To attract sustained investment, the government is improving the overall investment climate through regulatory simplification, licensing reforms, integration of the Online Single Submission system, and incentives such as tax allowances and tax holidays. These measures are intended to support advanced technology sectors that require significant and continuous capital outlays.
At the World Economic Forum, scientists warned that deaths from drug-resistant ‘superbugs’, microbes that can withstand existing antibiotics, may soon exceed fatalities from cancer unless new treatments are found.
To address this, companies like Basecamp Research have developed AI models trained on extensive genetic and biological data to accelerate drug discovery for complex diseases, including antibiotic resistance.
These AI systems can design novel molecules predicted to be effective against resistant microbes, with early laboratory testing showing a high success rate for candidates suggested by the models.
The technology enables a user to prompt the system to design entirely new molecular structures that bacteria have never encountered, potentially yielding treatments capable of combating resistant strains.
The approach reflects a broader trend in using AI for biomedical discovery, where generative models reduce the time and cost of identifying new drug candidates. While still early and requiring further validation, such systems could reshape how antibiotics are developed, offering new tools in the fight against antimicrobial resistance.
AI education start-up Sparkli has raised $5 million in seed funding to develop an ‘anti-chatbot’ AI platform intended to transform how children engage with digital content.
Unlike traditional chatbots that focus on general conversation, Sparkli positions its AI as an interactive learning companion, guiding children through topics such as maths, science and language skills in a dynamic, age-appropriate format.
The funding will support product development, content creation and expansion into new markets. Founders say the platform addresses increasing concerns about passive screen time by offering educational interactions that blend AI responsiveness with curriculum-aligned activities.
The company emphasises safe design and parental controls to ensure technology supports learning outcomes rather than distraction.
Investors backing Sparkli see demand for responsible AI applications for children that can enhance cognition and motivation while preserving digital well-being. As schools and homes increasingly integrate AI tools, Sparkli aims to position itself at the intersection of educational technology and child-centred innovation.