The Telangana government has launched Aikam, a new autonomous body aimed at positioning the state as a global proving ground for large-scale AI deployment. Unveiled at the World Economic Forum annual meeting in Davos, the initiative is designed to consolidate state-led AI efforts and support the development, testing, and rollout of AI solutions at scale.
State leaders framed the initiative as a shift away from pilot projects towards execution-focused implementation, emphasising transparency, governance, and public trust. The platform is designed to operate with agility while remaining anchored within government structures, reflecting Telangana’s ambition to rank among the world’s top 20 AI innovation hubs.
Aikam will focus on ecosystem building, including mass upskilling to create an AI-ready workforce, supporting AI startups, and strengthening collaboration among academia, research institutions, industry, and government. The state will back these efforts with access to large public datasets, enhanced computing infrastructure, and a dedicated AI Fund-of-Funds to help translate ideas into deployable solutions.
Alongside Aikam, Telangana launched the Responsible AI Standard and Ethics (RAISE) Index, a framework to measure responsible AI practices across the full AI lifecycle. Several international partnerships were also announced, covering skilling, applied research, healthcare, computing, and design, reinforcing the state’s emphasis on globally collaborative and responsible AI deployment.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Indonesia is promoting blended finance as a key mechanism to meet the growing investment needs of AI and digital infrastructure. By combining public and private funding, the government aims to accelerate the development of scalable digital systems while aligning investments with sustainability goals and local capacity-building.
The rapid global expansion of AI is driving a sharp rise in demand for computing power and data centres. The government views this trend as both a strategic economic opportunity and a challenge that requires sound financial governance and well-designed policies to ensure long-term national benefits.
International financial institutions and global investors are increasingly supportive of public–private financing models. Such partnerships are seen as essential for mobilising large-scale, long-term capital and supporting the sustainable development of AI-related infrastructure in developing economies.
To attract sustained investment, the government is improving the overall investment climate through regulatory simplification, licensing reforms, integration of the Online Single Submission system, and incentives such as tax allowances and tax holidays. These measures are intended to support advanced technology sectors that require significant and continuous capital outlays.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
At the World Economic Forum, scientists warned that deaths from drug-resistant ‘superbugs’, microbes that can withstand existing antibiotics, may soon exceed fatalities from cancer unless new treatments are found.
To address this, companies like Basecamp Research have developed AI models trained on extensive genetic and biological data to accelerate drug discovery for complex diseases, including antibiotic resistance.
These AI systems can design novel molecules predicted to be effective against resistant microbes, with early laboratory testing showing a high success rate for candidates suggested by the models.
The technology lets users prompt the system to design entirely new molecular structures that bacteria have never encountered, potentially yielding treatments capable of combating resistant strains.
The approach reflects a broader trend in using AI for biomedical discovery, where generative models reduce the time and cost of identifying new drug candidates. While still early and requiring further validation, such systems could reshape how antibiotics are developed, offering new tools in the fight against antimicrobial resistance.
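The workflow described above follows a broadly generate-and-filter pattern: a generative model proposes candidate molecules, a learned predictor scores them, and only the top scorers proceed to slow, expensive laboratory validation. The sketch below illustrates that pattern; the peptide representation, scoring function, and threshold are placeholder assumptions for illustration, not Basecamp Research's actual pipeline.

```python
# Illustrative generate-and-filter loop for AI-driven drug discovery.
# Both functions below are stand-ins: a real pipeline would use trained
# generative and predictive models, not random draws.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def generate_candidate(length: int = 12) -> str:
    """Stand-in for a generative model proposing a novel peptide."""
    return "".join(random.choice(AMINO_ACIDS) for _ in range(length))

def predicted_activity(candidate: str) -> float:
    """Stand-in for a learned predictor of antimicrobial activity."""
    return random.random()

def shortlist(n_candidates: int = 1_000, threshold: float = 0.95) -> list[str]:
    """Generate many candidates and keep only the top scorers, which
    would then be synthesised and validated in the laboratory."""
    candidates = (generate_candidate() for _ in range(n_candidates))
    return [c for c in candidates if predicted_activity(c) >= threshold]

print(f"{len(shortlist())} candidates selected for wet-lab testing")
```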
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI education start-up Sparkli has raised $5 million in seed funding to develop an ‘anti-chatbot’ AI platform intended to transform how children engage with digital content.
Unlike traditional chatbots that focus on general conversation, Sparkli positions its AI as an interactive learning companion, guiding children through topics such as maths, science and language skills in a dynamic, age-appropriate format.
The funding will support product development, content creation and expansion into new markets. Founders say the platform addresses increasing concerns about passive screen time by offering educational interactions that blend AI responsiveness with curriculum-aligned activities.
The company emphasises safe design and parental controls to ensure the technology supports learning outcomes rather than becoming a distraction.
Investors backing Sparkli see demand for responsible AI applications for children that can enhance cognition and motivation while preserving digital well-being. As schools and homes increasingly integrate AI tools, Sparkli aims to position itself at the intersection of educational technology and child-centred innovation.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Scientists and clinicians have created an AI model that can analyse routine abdominal imaging, such as CT scans, to identify adults at increased risk of future falls.
By detecting subtle patterns in body composition and muscle quality that may be linked to frailty, the AI system shows promise in augmenting traditional clinical assessments of fall risk.
Falls are a leading cause of injury and disability among older adults, and predicting who is most at risk can be challenging with standard clinical measures alone.
Integrating AI-based analysis with existing imaging data could enable earlier interventions, targeted therapies and personalised care plans, potentially reducing hospitalisations and long-term complications.
Although further validation is needed before routine clinical adoption, this research highlights how AI applications in medical imaging can extend beyond primary diagnosis to support predictive and preventative healthcare strategies.
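At its core, the modelling approach pairs imaging-derived body-composition measurements with an outcome label (a subsequent fall) and learns a risk score. The minimal sketch below does the same with synthetic data; the feature set, coefficients, and data are assumptions for illustration, not the study's model.

```python
# A minimal sketch of a fall-risk classifier over CT-derived features.
# The features and data are synthetic stand-ins for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical standardised features: muscle area, muscle density (HU),
# visceral fat area, and age.
X = rng.normal(size=(n, 4))
# Synthetic labels: lower muscle quantity/quality raises fall risk here.
weights = np.array([-0.8, -1.2, 0.5, 0.9])
y = (X @ weights + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print("Held-out AUC:", round(auc, 3))
```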
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A growing unease among writers is emerging as AI tools reshape how language is produced and perceived. Long-established habits, including the use of em dashes and semicolons, are increasingly being viewed with suspicion as machine-generated text becomes more common.
The concern is not opposition to AI itself, but the blurring of boundaries between human expression and automated output. Writers whose work was used to train large language models without consent say stylistic traits developed over decades are now being misread as algorithmic authorship.
Academic and editorial norms are also shifting under this pressure. Teaching practices that once valued rhythm, voice, and individual cadence are increasingly challenged by stricter stylistic rules, sometimes framed as safeguards against sloppy or machine-like writing rather than as matters of taste or craft.
At the same time, productivity tools embedded into mainstream software continue to intervene in the writing process, offering substitutions and revisions that prioritise clarity and efficiency over nuance. Such interventions risk flattening language and discouraging the idiosyncrasies that define human authorship.
As AI becomes embedded in publishing, education, and professional writing, the debate is shifting from detection to preservation. Many writers warn that protecting human voice and stylistic diversity is essential, arguing that affectless, uniform prose would erode creativity and trust.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
More than 800 creatives in the US have signed a statement accusing big technology companies of exploiting human work. High-profile figures from film and television have backed the initiative, which argues that training AI on creative content without consent amounts to theft.
The campaign was launched by the Human Artistry Campaign, a coalition representing creators, unions and industry groups. Supporters say AI systems should not be allowed to use artistic work without permission and fair compensation.
Actors and filmmakers warned that unchecked AI adoption threatens livelihoods across film, television and music. Campaign organisers said innovation should not come at the expense of creators’ rights or ownership of their work.
The statement adds to growing pressure on lawmakers and technology firms in the US. Creative workers are calling for clearer rules on how AI can be developed and deployed across the entertainment industry.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
On 13 January 2026, Graham Granger, a film and performing arts major at the University of Alaska Fairbanks, was arrested and charged with criminal mischief after ripping AI-assisted artwork from a campus gallery wall and eating around 57 of the images. He described the act as a protest and performance piece against the use of AI in art.
The destroyed exhibit, titled Shadow Searching: ChatGPT psychosis, was created by another student, Nick Dwyer, using AI to explore his personal experiences with the technology.
Dwyer criticised Granger’s actions as damaging to his work and initially considered pressing charges, though he later dropped the idea in favour of the state pursuing the case.
Granger defended his act as both protest and performance art, arguing that reliance on AI undermines human creativity and that the process of making art matters as much as the finished product. He said he did not regret the incident and saw it as a way to spark conversation about the role of AI in creative fields.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new analysis found Grok generated an estimated three million sexualised images in 11 days, including around 23,000 appearing to depict children. The findings raise serious concerns over safeguards, content moderation, and platform responsibility.
The surge followed the launch of Grok’s one-click image editing feature in late December, which quickly gained traction among users. Restrictions were later introduced, including paid access limits and technical measures intended to prevent the ‘undressing’ of images.
Researchers based their estimates on a random sample of 20,000 images, extrapolating the proportions they found to the more than 4.6 million images generated during the study period. Automated tools and manual review identified sexualised content and confirmed cases involving individuals appearing under 18.
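The analysis does not publish the underlying sample counts, but the extrapolation itself is simple proportion arithmetic: roughly 13,000 sexualised images and 100 apparent-minor images in a 20,000-image sample would scale to the corpus-wide figures above. The sketch below works through that arithmetic with a basic confidence interval; the sample counts are back-calculated assumptions, not the researchers' data.

```python
# Sample-to-corpus extrapolation with a normal-approximation 95% CI.
# Sample counts below are illustrative, back-calculated from the
# article's reported totals, not the researchers' actual data.
import math

TOTAL_IMAGES = 4_600_000   # images generated during the study period
SAMPLE_SIZE = 20_000       # randomly sampled images that were reviewed

def extrapolate(flagged_in_sample: int) -> tuple[float, float, float]:
    """Scale a sample proportion to a corpus-wide count, with a 95% CI."""
    p = flagged_in_sample / SAMPLE_SIZE
    se = math.sqrt(p * (1 - p) / SAMPLE_SIZE)  # standard error of p
    estimate = p * TOTAL_IMAGES
    margin = 1.96 * se * TOTAL_IMAGES
    return estimate, estimate - margin, estimate + margin

# ~13,000 flagged in the sample implies ~3 million corpus-wide;
# ~100 apparent-minor images implies ~23,000 corpus-wide.
for label, flagged in [("sexualised", 13_043), ("apparent minors", 100)]:
    est, lo, hi = extrapolate(flagged)
    print(f"{label}: ~{est:,.0f} (95% CI {lo:,.0f} to {hi:,.0f})")
```

Note how the relative uncertainty grows for the rarer category: a 20,000-image sample pins down a 65% prevalence tightly, but a 0.5% prevalence carries a proportionally much wider interval.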
Campaigners have warned that the findings expose significant gaps in AI safety controls, particularly in protecting children. Calls are growing for stricter oversight, stronger accountability, and more robust safeguards before large-scale AI image deployment.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Police in Japan have arrested a man accused of creating and selling non-consensual deepfake pornography using AI tools. The Tokyo Metropolitan Police Department said thousands of manipulated images of female celebrities were distributed through paid websites.
Investigators in Japan allege the suspect generated hundreds of thousands of images over two years using freely available generative AI software. Authorities say the content was promoted on social media before being sold via subscription platforms.
The arrest follows earlier cases in Japan and reflects growing concern among police worldwide. In South Korea, law enforcement has reported hundreds of arrests linked to deepfake sexual crimes, while cases have also emerged in the UK.
European agencies, including Europol, have also coordinated arrests tied to AI-generated abuse material. Law enforcement bodies say the spread of accessible AI tools is forcing rapid changes in forensic investigation and in the handling of digital evidence.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!