At the World Economic Forum, scientists warned that deaths from drug-resistant ‘superbugs,’ microbes that can withstand existing antibiotics, may soon exceed fatalities from cancer unless new treatments are found.
To address this, companies like Basecamp Research have developed AI models trained on extensive genetic and biological data to accelerate drug discovery for complex diseases, including antibiotic resistance.
These AI systems can design novel molecules predicted to be effective against resistant microbes, with early laboratory testing reportedly showing a high success rate for candidates the models suggest.
The technology enables a user to prompt the system to design entirely new molecular structures that bacteria have never encountered, potentially yielding treatments capable of combating resistant strains.
The approach reflects a broader trend in using AI for biomedical discovery, where generative models reduce the time and cost of identifying new drug candidates. While still early and requiring further validation, such systems could reshape how antibiotics are developed, offering new tools in the fight against antimicrobial resistance.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI education start-up Sparkli has raised $5 million in seed funding to develop an ‘anti-chatbot’ AI platform to transform how children engage with digital content.
Unlike traditional chatbots that focus on general conversation, Sparkli positions its AI as an interactive learning companion, guiding kids through topics such as math, science and language skills in a dynamic, age-appropriate format.
The funding will support product development, content creation and expansion into new markets. Founders say the platform addresses increasing concerns about passive screen time by offering educational interactions that blend AI responsiveness with curriculum-aligned activities.
The company emphasises safe design and parental controls to ensure technology supports learning outcomes rather than distraction.
Investors backing Sparkli see demand for responsible AI applications for children that can enhance cognition and motivation while preserving digital well-being. As schools and homes increasingly integrate AI tools, Sparkli aims to position itself at the intersection of educational technology and child-centred innovation.
Scientists and clinicians have created an AI model that can analyse routine abdominal imaging, such as CT scans, to identify adults at increased risk of future falls.
By detecting subtle patterns in body composition and muscle quality that may be linked to frailty, the AI system shows promise in augmenting traditional clinical assessments of fall risk.
Falls are a leading cause of injury and disability among older adults, and predicting who is most at risk can be challenging with standard clinical measures alone.
Integrating AI-based analysis with existing imaging data could enable earlier interventions, targeted therapies and personalised care plans, potentially reducing hospitalisations and long-term complications.
Although further validation is needed before routine clinical adoption, this research highlights how AI applications in medical imaging can extend beyond primary diagnosis to support predictive and preventative healthcare strategies.
A growing unease among writers is emerging as AI tools reshape how language is produced and perceived. Long-established habits, including the use of em dashes and semicolons, are increasingly being viewed with suspicion as machine-generated text becomes more common.
The concern is not opposition to AI itself, but the blurring of boundaries between human expression and automated output. Writers whose work was used to train large language models without consent say stylistic traits developed over decades are now being misread as algorithmic authorship.
Academic and editorial norms are also shifting under this pressure. Teaching practices that once valued rhythm, voice, and individual cadence are increasingly challenged by stricter stylistic rules, sometimes framed as safeguards against sloppy or machine-like writing rather than as matters of taste or craft.
At the same time, productivity tools embedded into mainstream software continue to intervene in the writing process, offering substitutions and revisions that prioritise clarity and efficiency over nuance. Such interventions risk flattening language and discouraging the idiosyncrasies that define human authorship.
As AI becomes embedded in publishing, education, and professional writing, the debate is shifting from detection to preservation. Many writers warn that protecting human voice and stylistic diversity is essential, arguing that affectless, uniform prose would erode creativity and trust.
More than 800 creatives in the US have signed an anti-AI campaign accusing big technology companies of exploiting human work. High-profile figures from film and television have backed the initiative, which argues that training AI on creative content without consent amounts to theft.
The campaign was launched by the Human Artistry Campaign, a coalition of creators, unions and industry groups. Supporters say AI systems should not be allowed to use artistic work without permission and fair compensation.
Actors and filmmakers warned that unchecked AI adoption threatens livelihoods across film, television and music. Campaign organisers said innovation should not come at the expense of creators’ rights or ownership of their work.
The statement adds to growing pressure on lawmakers and technology firms in the US. Creative workers are calling for clearer rules on how AI can be developed and deployed across the entertainment industry.
On 13 January 2026, Graham Granger, a film and performing arts major at the University of Alaska Fairbanks, was arrested and charged with criminal mischief after ripping AI-assisted artwork from a campus gallery wall and eating around 57 of the images. He described the act as a protest and performance piece against the use of AI in art.
The destroyed exhibit, titled Shadow Searching: ChatGPT psychosis, was created by another student, Nick Dwyer, using AI to explore his personal experiences with the technology.
Dwyer criticised Granger’s actions as destructive of his work and initially considered pressing charges, though he later withdrew in favour of the state pursuing the case.
Granger defended his act as both protest and performance art, arguing that reliance on AI undermines human creativity and that the process of making art matters as much as the finished product. He said he did not regret the incident and saw it as a way to spark conversation about the role of AI in creative fields.
A new analysis found Grok generated an estimated three million sexualised images in 11 days, including around 23,000 appearing to depict children. The findings raise serious concerns over safeguards, content moderation, and platform responsibility.
The surge followed the launch of Grok’s one-click image editing feature in late December, which quickly gained traction among users. Restrictions were later introduced, including paid access limits and technical measures to prevent image undressing.
Researchers based their estimates on a random sample of 20,000 images, extrapolating to the more than 4.6 million images generated in total during the study period. Automated tools and manual review were used to identify sexualised content and to confirm cases involving individuals who appeared to be under 18.
Campaigners have warned that the findings expose significant gaps in AI safety controls, particularly in protecting children. Calls are growing for stricter oversight, stronger accountability, and more robust safeguards before large-scale AI image deployment.
Police in Japan have arrested a man accused of creating and selling non-consensual deepfake pornography using AI tools. The Tokyo Metropolitan Police Department said thousands of manipulated images of female celebrities were distributed through paid websites.
Investigators in Japan allege the suspect generated hundreds of thousands of images over two years using freely available generative AI software. Authorities say the content was promoted on social media before being sold via subscription platforms.
The arrest follows earlier cases in Japan and reflects growing concern among police worldwide. In South Korea, law enforcement has reported hundreds of arrests linked to deepfake sexual crimes, while cases have also emerged in the UK.
European agencies, including Europol, have also coordinated arrests tied to AI-generated abuse material. Law enforcement bodies say the spread of accessible AI tools is forcing rapid changes in forensic investigation and in the handling of digital evidence.
Stanford University, ETH Zurich, and EPFL have launched a transatlantic partnership to develop open-source AI models prioritising societal values over commercial interests.
The partnership was formalised through a memorandum of understanding signed during the World Economic Forum meeting in Davos.
The agreement establishes long-term cooperation in AI research, education, and innovation, with a focus on large-scale multimodal models. The initiative aims to strengthen academia’s influence over global AI by promoting transparency, accountability, and inclusive access.
Joint projects will develop open datasets, evaluation benchmarks, and responsible deployment frameworks, alongside researcher exchanges and workshops. The effort aims to embed human-centred principles into technical progress while supporting interdisciplinary discovery.
Academic leaders said the alliance reinforces open science and cultural diversity amid growing corporate influence over foundation models. The collaboration positions universities as central drivers of ethical, trustworthy, and socially grounded AI development.
Google has expanded AI Search with Personal Intelligence, enabling more personalised responses using Gmail and Google Photos data. The feature aims to combine global information with individual context to deliver search results tailored to each user.
Eligible Google AI Pro and AI Ultra subscribers can opt in to securely connect their Gmail and Photos accounts, allowing Search to draw on personal preferences, travel plans, purchases, and memories.
The system uses contextual insights to generate recommendations that reflect users’ habits, interests, and upcoming activities.
Personal Intelligence enhances shopping, travel planning, and lifestyle discovery by anticipating needs and offering customised suggestions. Privacy controls remain central, with users able to manage data connections and turn off personal context at any time.
The feature is launching as an experimental Labs release for English-language users in the United States, with broader availability expected following testing. Google said ongoing feedback will guide refinements as the system continues to evolve.