OpenAI’s latest model, GPT-5.2, has sparked concern after repeatedly citing Grokipedia, an AI-generated encyclopaedia launched by Elon Musk’s xAI, raising fresh fears that such citations could amplify misinformation.
Testing by The Guardian showed the model referencing Grokipedia multiple times when answering questions on geopolitics and historical figures.
Launched in October 2025, the platform positions itself as a rival to Wikipedia but relies entirely on automated content generation, with no human editing. Critics warn that this lack of human oversight raises the risk of factual errors and ideological bias, and Grokipedia has already drawn criticism for promoting controversial narratives.
OpenAI said its systems use safety filters and diverse public sources, while xAI dismissed the concerns as media distortion. The episode deepens scrutiny of AI-generated knowledge platforms amid growing regulatory and public pressure for transparency and accountability.
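A citation check of this kind is simple to reproduce. Below is a minimal sketch in Python using OpenAI’s official client library; the model name, the prompt list and the naive substring match are illustrative assumptions, not a reconstruction of the Guardian’s actual methodology.

```python
import re
from openai import OpenAI  # official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative prompts in the areas reportedly probed (geopolitics, historical figures).
PROMPTS = [
    "Summarise the territorial dispute in the South China Sea, with sources.",
    "Who was Mikhail Gorbachev? Please cite the references you rely on.",
]

# Naive check: does the response mention Grokipedia anywhere?
GROKIPEDIA = re.compile(r"grokipedia(?:\.com)?", re.IGNORECASE)

for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-5.2",  # model name as reported; swap in any available model
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.choices[0].message.content or ""
    hits = GROKIPEDIA.findall(text)
    print(f"{prompt[:50]!r} -> {len(hits)} Grokipedia mention(s)")
```

A systematic audit would run many prompts per topic and count citation frequency across repeated samples, since model outputs vary from run to run.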
Researchers and free-speech advocates are warning that coordinated swarms of AI agents could soon be deployed to manipulate public opinion at a scale capable of undermining democratic systems.
According to a consortium of academics from leading universities, advances in generative and agentic AI now enable large numbers of human-like bots to infiltrate online communities and autonomously simulate organic political discourse.
Unlike earlier forms of automated misinformation, AI swarms are designed to adapt to social dynamics, learn community norms and exchange information in pursuit of a shared objective.
By mimicking human behaviour and spreading tailored narratives gradually, such systems could fabricate consensus, amplify doubt around electoral processes and normalise anti-democratic outcomes without triggering immediate detection.
Evidence of early influence operations has already emerged in recent elections across Asia, where AI-driven accounts have engaged users with large volumes of unverifiable information rather than overt propaganda.
Researchers warn that information overload, strategic neutrality and algorithmic amplification may prove more effective than traditional disinformation campaigns.
The authors argue that democratic resilience now depends on global coordination, combining technical safeguards such as watermarking and detection tools with stronger governance of political AI use.
Without collective action, they caution that AI-enabled manipulation risks outpacing existing regulatory and institutional defences.
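Of the safeguards the authors mention, statistical watermarking is the most concrete. In one well-known family of schemes (e.g. Kirchenbauer et al., 2023), the generator biases each token toward a pseudo-random ‘green list’ seeded by the preceding token, and a detector that knows the seed tests whether a text contains implausibly many green tokens. The sketch below shows only the detection side, with a toy hash standing in for a production partition scheme.

```python
import hashlib
import math

GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" at each step

def is_green(prev_token: str, token: str, key: str = "demo-key") -> bool:
    """Toy stand-in for the pseudo-random green-list partition:
    hash the (key, previous token, token) triple and keep half the space."""
    digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def watermark_z_score(tokens: list[str]) -> float:
    """z-score of the observed green-token count against the null hypothesis
    that tokens were chosen independently of the green list."""
    n = len(tokens) - 1  # number of (previous token, token) pairs
    greens = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std

# A large positive z-score (say > 4) suggests the text was produced by a
# generator that favoured green tokens, i.e. that it carries the watermark.
sample = "the quick brown fox jumps over the lazy dog".split()
print(round(watermark_z_score(sample), 2))
```

Watermarking only helps when generators cooperate; adversarial swarms built on open models would carry no watermark, which is why the authors pair technical detection with governance of political AI use.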
South Korea has moved towards regulatory action against Grok, the generative AI chatbot developed by xAI, following allegations that the system was used to generate and distribute sexually exploitative deepfake images.
The country’s Personal Information Protection Commission has launched a preliminary fact-finding review to assess whether violations occurred and whether the matter falls within its legal remit.
The review follows international reports accusing Grok of facilitating the creation of explicit and non-consensual images of real individuals, including minors.
Under the Personal Information Protection Act of South Korea, generating or altering sexual images of identifiable people without consent may constitute unlawful handling of personal data, exposing providers to enforcement action.
Concerns have intensified after civil society groups estimated that millions of explicit images were produced through Grok over a short period, with thousands involving children.
Authorities in the US, Europe and Canada have opened inquiries, while parts of Southeast Asia have opted to block access to the service altogether.
In response, xAI has introduced technical restrictions preventing users from generating or editing images of real people. Korean regulators have also demanded stronger youth protection measures from X, warning that failure to address criminal content involving minors could result in administrative penalties.
The Telangana government has launched Aikam, a new autonomous body aimed at positioning the state as a global proving ground for large-scale AI deployment. Unveiled at the World Economic Forum annual meeting in Davos, the initiative is designed to consolidate state-led AI efforts and support the development, testing, and rollout of AI solutions at scale.
State leaders framed the initiative as a shift away from pilot projects towards execution-focused implementation, emphasising transparency, governance, and public trust. The platform is designed to operate with agility while remaining anchored within government structures, reflecting Telangana’s ambition to rank among the world’s top 20 AI innovation hubs.
Aikam will focus on ecosystem building, including mass upskilling to create an AI-ready workforce, supporting AI startups, and strengthening collaboration among academia, research institutions, industry, and government. The state will back these efforts with access to large public datasets, enhanced computing infrastructure, and a dedicated AI Fund-of-Funds to help translate ideas into deployable solutions.
Alongside Aikam, Telangana launched the Responsible AI Standard and Ethics (RAISE) Index, a framework to measure responsible AI practices across the full AI lifecycle. Several international partnerships were also announced, covering skilling, applied research, healthcare, computing, and design, reinforcing the state’s emphasis on globally collaborative and responsible AI deployment.
Indonesia is promoting blended finance as a key mechanism to meet the growing investment needs of AI and digital infrastructure. By combining public and private funding, the government aims to accelerate the development of scalable digital systems while aligning investments with sustainability goals and local capacity-building.
The rapid global expansion of AI is driving a sharp rise in demand for computing power and data centres. The government views this trend as both a strategic economic opportunity and a challenge that requires sound financial governance and well-designed policies to ensure long-term national benefits.
International financial institutions and global investors are increasingly supportive of public–private financing models. Such partnerships are seen as essential for mobilising large-scale, long-term capital and supporting the sustainable development of AI-related infrastructure in developing economies.
To attract sustained investment, the government is improving the overall investment climate through regulatory simplification, licensing reforms, integration of the Online Single Submission system, and incentives such as tax allowances and tax holidays. These measures are intended to support advanced technology sectors that require significant and continuous capital outlays.
At the World Economic Forum, scientists warned that deaths from drug-resistant ‘superbugs’, microbes that can withstand existing antibiotics, may soon exceed fatalities from cancer unless new treatments are found.
To address this, companies like Basecamp Research have developed AI models trained on extensive genetic and biological data to accelerate drug discovery for complex diseases, including antibiotic resistance.
These AI systems can design novel molecules predicted to be effective against resistant microbes, with early laboratory testing showing a high success rate for candidates suggested by the models.
The technology lets users prompt the system for entirely new molecular structures that bacteria have never encountered, potentially yielding treatments capable of combating resistant strains.
The approach reflects a broader trend in using AI for biomedical discovery, where generative models reduce the time and cost of identifying new drug candidates. While still early and requiring further validation, such systems could reshape how antibiotics are developed, offering new tools in the fight against antimicrobial resistance.
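The article does not describe the pipeline in detail, but generate-and-filter loops of this kind typically interpose cheap computational checks between the generative model and the laboratory. The sketch below, assuming candidates arrive as SMILES strings and using the open-source RDKit toolkit, shows only the filtering stage; the property windows are illustrative, not a validated antibiotic screen.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

# Hypothetical output of a generative model: candidate molecules as SMILES.
candidates = [
    "CC(=O)Oc1ccccc1C(=O)O",   # aspirin, as a sanity check
    "C1=CC=CN=C1",             # pyridine (too small to pass the window below)
    "not-a-molecule",          # invalid string the parser should reject
]

def passes_screen(smiles: str) -> bool:
    """Keep only parseable molecules inside rough drug-like property windows."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:                   # unparseable SMILES -> discard
        return False
    mw = Descriptors.MolWt(mol)       # molecular weight
    logp = Descriptors.MolLogP(mol)   # lipophilicity estimate
    return 100 <= mw <= 600 and -2 <= logp <= 5

shortlist = [s for s in candidates if passes_screen(s)]
print(shortlist)  # only valid, property-filtered candidates go to the lab
```

In practice such filters are followed by structure-based scoring and, ultimately, the wet-lab testing the article describes, which remains the decisive step.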
AI education start-up Sparkli has raised $5 million in seed funding to develop an ‘anti-chatbot’ AI platform intended to transform how children engage with digital content.
Unlike traditional chatbots that focus on general conversation, Sparkli positions its AI as an interactive learning companion, guiding children through topics such as math, science and language skills in a dynamic, age-appropriate format.
The funding will support product development, content creation and expansion into new markets. Founders say the platform addresses increasing concerns about passive screen time by offering educational interactions that blend AI responsiveness with curriculum-aligned activities.
The company emphasises safe design and parental controls to ensure technology supports learning outcomes rather than distraction.
Investors backing Sparkli see demand for responsible AI applications for children that can enhance cognition and motivation while preserving digital well-being. As schools and homes increasingly integrate AI tools, Sparkli aims to position itself at the intersection of educational technology and child-centred innovation.
Scientists and clinicians have created an AI model that can analyse routine abdominal imaging, such as CT scans, to identify adults at increased risk of future falls.
By detecting subtle patterns in body composition and muscle quality that may be linked to frailty, the AI system shows promise in augmenting traditional clinical assessments of fall risk.
Falls are a leading cause of injury and disability among older adults, and predicting who is most at risk can be challenging with standard clinical measures alone.
Integrating AI-based analysis with existing imaging data could enable earlier interventions, targeted therapies and personalised care plans, potentially reducing hospitalisations and long-term complications.
Although further validation is needed before routine clinical adoption, this research highlights how AI applications in medical imaging can extend beyond primary diagnosis to support predictive and preventative healthcare strategies.
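One signal such systems can draw on, muscle quality, is commonly quantified from CT by windowing Hounsfield units: skeletal muscle conventionally falls between roughly -29 and +150 HU, and lower mean attenuation within that band indicates fattier, weaker muscle. Below is a minimal sketch of that feature extraction, assuming a single axial slice arrives as a NumPy array of HU values; a real system would segment muscle anatomically rather than by thresholds alone, and this is not the published model.

```python
import numpy as np

# Conventional Hounsfield-unit window for skeletal muscle on CT.
MUSCLE_HU_MIN, MUSCLE_HU_MAX = -29, 150

def muscle_features(slice_hu: np.ndarray, pixel_area_mm2: float) -> dict:
    """Crude body-composition features from one axial CT slice of HU values."""
    mask = (slice_hu >= MUSCLE_HU_MIN) & (slice_hu <= MUSCLE_HU_MAX)
    area_cm2 = mask.sum() * pixel_area_mm2 / 100.0  # mm^2 -> cm^2
    mean_hu = float(slice_hu[mask].mean()) if mask.any() else float("nan")
    return {"muscle_area_cm2": area_cm2, "mean_muscle_hu": mean_hu}

# Synthetic example: a 512x512 slice of soft-tissue-like values.
rng = np.random.default_rng(0)
slice_hu = rng.normal(loc=40, scale=80, size=(512, 512))
print(muscle_features(slice_hu, pixel_area_mm2=0.7))
```

Features like these would then feed a downstream risk model alongside clinical variables such as age and medication history.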
A growing unease among writers is emerging as AI tools reshape how language is produced and perceived. Long-established habits, including the use of em dashes and semicolons, are increasingly being viewed with suspicion as machine-generated text becomes more common.
The concern is not opposition to AI itself, but the blurring of boundaries between human expression and automated output. Writers whose work was used to train large language models without consent say stylistic traits developed over decades are now being misread as algorithmic authorship.
Academic and editorial norms are also shifting under this pressure. Teaching practices that once valued rhythm, voice, and individual cadence are increasingly challenged by stricter stylistic rules, sometimes framed as safeguards against sloppy or machine-like writing rather than as matters of taste or craft.
At the same time, productivity tools embedded into mainstream software continue to intervene in the writing process, offering substitutions and revisions that prioritise clarity and efficiency over nuance. Such interventions risk flattening language and discouraging the idiosyncrasies that define human authorship.
As AI becomes embedded in publishing, education, and professional writing, the debate is shifting from detection to preservation. Many writers warn that protecting human voice and stylistic diversity is essential, arguing that affectless, uniform prose would erode creativity and trust.
More than 800 creatives in the US have signed on to an anti-AI campaign accusing big technology companies of exploiting human work. High-profile figures from film and television have backed the initiative, which argues that training AI on creative content without consent amounts to theft.
The campaign was launched by the Human Artistry Campaign, a coalition representing creators, unions and industry groups. Supporters say AI systems should not be allowed to use artistic work without permission and fair compensation.
Actors and filmmakers in the US warned that unchecked AI adoption threatens livelihoods across film, television and music. Campaign organisers said innovation should not come at the expense of creators’ rights or ownership of their work.
The statement adds to growing pressure on lawmakers and technology firms in the US. Creative workers are calling for clearer rules on how AI can be developed and deployed across the entertainment industry.