IIT Bombay and BharatGen lead AI push with cultural datasets

In a landmark effort to support AI research grounded in Indian knowledge systems, IIT Bombay has digitised 30 ancient textbooks covering topics such as astronomy, medicine and mathematics, some of them written up to 18 centuries ago.

The initiative, part of the government-backed AIKosh portal, has produced a dataset comprising approximately 218,000 sentences and 1.5 million words, now available to researchers across the country.

Launched in March, AIKosh serves as a national repository for datasets, models and toolkits to foster home-grown AI innovation.

Alongside BharatGen—a consortium led by IIT Bombay and comprising IIT Kanpur, IIT Madras, IIT Hyderabad, IIT Mandi, IIM Indore and IIIT Hyderabad—the institute has contributed 37 diverse models and datasets to the platform.

These contributions include 16 culturally significant datasets from IIT Bombay alone, as well as 21 AI models from BharatGen, which is supported by the Department of Science and Technology.

Professor Ganesh Ramakrishnan, who leads the initiative, said the team is developing sovereign AI models for India, trained from scratch and not merely fine-tuned versions of existing tools.

These models aim to be data- and compute-efficient while being culturally and linguistically relevant. The collection also includes datasets for audio-visual learning—such as tutorials on organic farming and waste-to-toy creation—mathematical reasoning in Hindi and English, image-based question answering, and video-text recognition.

One dataset even features question-answering derived from the works of historian Dharampal. ‘This is about setting benchmarks for the AI ecosystem in India,’ said Ramakrishnan, noting that the resources are openly available to researchers, enterprises and academic institutions alike.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China creates AI to detect real nuclear warheads

Chinese scientists have created the world’s first AI-based system capable of identifying real nuclear warheads from decoys, marking a significant step in arms control verification.

The breakthrough, developed by the China Institute of Atomic Energy (CIAE), could strengthen Beijing’s hand in stalled disarmament talks, although it also raises difficult questions about AI’s growing role in managing weapons of mass destruction.

The technology builds on a long-standing US–China proposal but faced key obstacles: how to train AI using sensitive nuclear data, gain military approval without risking secret leaks, and persuade sceptical nations like the US to move past Cold War-era inspection methods.

So far, only the AI training has been completed, with the rest of the process still pending international acceptance.

The AI system uses deep learning and cryptographic protocols to analyse scrambled radiation signals from warheads behind a polythene wall, ensuring the weapons’ internal designs remain hidden.

The machine can verify a warhead’s chain-reaction potential without accessing classified details. According to CIAE, repeated randomised tests reduce the chance of deception to nearly zero.
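The statistical intuition behind repeated randomised tests can be sketched in a few lines. The actual CIAE protocol is not public, so the single-round figure below is an illustrative assumption: if a decoy can fool any one randomised test with probability p, the chance of it surviving n independent rounds is p to the power n, which collapses towards zero quickly.

```python
# Sketch of the statistical argument behind repeated randomised
# verification rounds (illustrative assumption only; the actual
# CIAE protocol and its per-round error rates are not public).

def deception_probability(p_single_round: float, n_rounds: int) -> float:
    """Chance a decoy passes every one of n independent random tests,
    assuming it fools any single test with probability p_single_round."""
    return p_single_round ** n_rounds

# Even a decoy that fools half of all individual tests is almost
# certainly caught within a few dozen rounds.
print(deception_probability(0.5, 30))  # ~9.3e-10
```

Under these assumed numbers, a decoy that passes half of all single tests would survive thirty rounds less than once in a billion attempts, which is the sense in which repeated randomisation drives the chance of deception to nearly zero.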

While both China and the US have pledged not to let AI control nuclear launch decisions, the new system underlines AI’s expanding role in national defence.

Beijing insists the AI can be jointly trained and sealed before use to ensure transparency, but sceptics remain wary of trust, backdoor access and growing militarisation of AI.

Uber’s product chief turns to AI for reports and research

Uber’s chief product officer, Sachin Kansal, is embracing AI to streamline his daily workflow—particularly through tools like ChatGPT, Google Gemini, and, soon, NotebookLM.

Speaking on ‘Lenny’s Podcast,’ Kansal revealed how AI summarisation helps him digest lengthy 50- to 100-page reports he otherwise wouldn’t have time to read. He uses AI to understand market trends and rider feedback across regions such as Brazil, South Korea, and South Africa.

Kansal also relies on AI as a research assistant. For instance, when exploring new driver features, he used ChatGPT’s deep research capabilities to simulate possible driver reactions and generate brainstorming ideas.

‘It’s an amazing research assistant,’ he said. ‘It’s absolutely a starting point for a brainstorm with my team.’

He’s now eyeing Google’s NotebookLM, a note-taking and research tool, as the next addition to his AI toolkit—especially its ‘Audio Overview’ feature, which turns documents into AI-generated podcast-style discussions.

Uber CEO Dara Khosrowshahi previously noted that too few of Uber’s 30,000+ employees are using AI and stressed that mastering AI tools, especially for coding, would soon be essential.

Students build world’s fastest Rubik’s Cube solver

A group of engineering students from Purdue University have built the world’s fastest Rubik’s Cube-solving robot, achieving a Guinness World Record time of just 0.103 seconds.

The team focused on improving nearly every aspect of the process, from image capture to cube construction, rather than relying on faster motors alone.

Rather than processing full images, the robot uses low-resolution cameras aimed at opposite corners of the cube, capturing only the essential parts of the image to save time.

Instead of converting camera data into full digital pictures, the system directly reads colour data to identify the cube’s layout. Although slightly less accurate, the method allows quicker recognition and faster solving.
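A minimal sketch of that kind of shortcut, with invented reference values (the team's actual sensor pipeline is not documented here): each sticker is reduced to a single RGB sample and matched to the nearest of six reference colours, skipping full image reconstruction entirely.

```python
# Hypothetical nearest-colour sticker classification; the reference
# values and sampling details are invented for illustration.

# Approximate RGB references for the six standard cube colours.
REFERENCE_COLOURS = {
    "white":  (255, 255, 255),
    "yellow": (255, 213, 0),
    "red":    (196, 30, 58),
    "orange": (255, 88, 0),
    "blue":   (0, 81, 186),
    "green":  (0, 158, 96),
}

def classify_sticker(rgb: tuple[int, int, int]) -> str:
    """Match a raw RGB sample to the closest reference colour by
    squared Euclidean distance -- no full image is ever built."""
    return min(
        REFERENCE_COLOURS,
        key=lambda name: sum(
            (a - b) ** 2 for a, b in zip(rgb, REFERENCE_COLOURS[name])
        ),
    )

print(classify_sticker((250, 205, 20)))  # prints "yellow"
```

Working on a handful of colour samples instead of megapixel frames is what makes this trade of a little accuracy for a lot of speed possible.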

The robot, known as Purdubik’s Cube, benefits from software designed specifically for machines, allowing it to perform overlapping turns using a technique called corner cutting. Rather than waiting for one rotation to finish before starting the next, the robot begins the following turn early, shaving off valuable milliseconds.

To withstand the stress, the team designed a cube with extremely tight tension using reinforced nylon, making it nearly impossible to turn by hand.

High-speed motors controlled the robot’s movements, with a trapezoidal acceleration profile ensuring rapid but precise turns. The students believe the record could fall again—provided someone develops a stronger, lighter cube using materials like carbon fibre.
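A trapezoidal profile simply ramps velocity up at a fixed acceleration, cruises at a peak speed, then ramps down symmetrically. The sketch below illustrates the idea with invented numbers; the robot's real motor parameters are not published.

```python
# Illustrative trapezoidal velocity profile for a single cube turn.
# All figures (acceleration, peak speed, duration) are invented for
# illustration, not taken from the Purdue team's hardware.

def trapezoid_velocity(t: float, accel: float, v_max: float,
                       total_time: float) -> float:
    """Velocity at time t: ramp up at `accel`, cruise at `v_max`,
    then decelerate symmetrically to stop at `total_time`."""
    t_ramp = v_max / accel          # time spent accelerating
    if t < t_ramp:                  # acceleration phase
        return accel * t
    if t > total_time - t_ramp:     # deceleration phase
        return accel * (total_time - t)
    return v_max                    # constant-velocity cruise

# Sample the profile across a hypothetical 10 ms turn.
for t_ms in (0, 1, 5, 9, 10):
    v = trapezoid_velocity(t_ms / 1000, accel=50000,
                           v_max=100, total_time=0.010)
    print(f"{t_ms} ms: {v:.1f} rad/s")
```

The flat-topped shape is what delivers "rapid but precise" turns: the motor spends most of the move at peak speed, yet always starts and stops gently enough to avoid overshooting the face alignment.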

Serbian startup revolutionises cancer diagnostics with AI-powered radiotherapy tool

A group of Serbian physicists, programmers, and radiologists, led by Stevan Vrbaški, has developed a groundbreaking software solution through their startup, Vinaver Medical. After gaining experience through studies and research abroad, the team returned to Serbia, where they launched a project to improve cancer diagnostics using advanced radiotherapy technology.

Their work is centred on particle radiotherapy, a precise cancer treatment method that surpasses conventional X-ray-based radiotherapy. The innovation lies in software that combines AI and CT imaging to enhance diagnostic accuracy and improve the planning of radiotherapy in oncology.

Unlike traditional methods, this solution enables far more precise targeting of tumours, which can potentially reduce damage to surrounding healthy tissue. According to Vrbaški, their software helps determine the optimal delivery of radiotherapy based on patient scans.

Vinaver Medical received initial funding through the ‘Smart Start’ program and later support from the ‘Digital Serbia Initiative.’ Their product is currently being tested in the United States, the European Union, and the Balkans.

Vrbaški highlights the challenges of developing and certifying medical technologies, emphasising the need for rigorous testing, user adaptation, and risk reduction before market release. Looking ahead, the team plans to visit hospitals and innovation centres in Italy and the US to fine-tune their solution to better meet user needs.

Once the product is certified and market-ready, the team aims to launch commercially within twelve to eighteen months, as testing with Dutch partners continues to validate the software’s ability to assist doctors in diagnosing various illnesses with greater accuracy.

AI copyright clash stalls UK data bill

A bitter standoff over AI and copyright has returned to the House of Lords, as ministers and peers clash over how to protect creative workers while fostering technological innovation.

At the centre of the debate is the proposed Data (Use and Access) Bill, which was expected to pass smoothly but is now stuck in parliamentary limbo due to growing resistance.

The bill would allow AI firms to access copyrighted material unless rights holders opt out, a proposal that many artists and peers believe threatens the UK’s £124bn creative industry.

Nearly 300 Lords have called for AI developers to disclose what content they use and seek licences instead of relying on blanket access. Former film director Baroness Kidron described the policy as ‘state-sanctioned theft’ and warned it would sacrifice British talent to benefit large tech companies.

Supporters of the bill, like former Meta executive Sir Nick Clegg, argue that forcing AI firms to seek individual permissions would severely damage the UK’s AI sector. The Department for Science, Innovation and Technology insists it will only consider changes if they are proven to benefit creators.

If no resolution is found, the bill risks being shelved entirely. That would also scrap unrelated proposals bundled into it, such as new NHS data-sharing rules and plans for a nationwide underground map.

Despite the bill’s wide scope, the fight over copyright remains its most divisive and emotionally charged feature.

Gmail adds automatic AI summaries

Gmail on mobile now displays AI-generated summaries by default, marking a shift in how Google’s Gemini assistant operates within inboxes.

Instead of relying on users to request a summary, Gemini will now decide when it’s useful—typically for long email threads with multiple replies—and present a brief summary card at the top of the message.

These summaries update automatically as conversations evolve, aiming to save users from scrolling through lengthy discussions.

The feature is currently limited to mobile devices and available only to users with Google Workspace accounts, Gemini Education add-ons, or a Google One AI Premium subscription. For the moment, summaries are confined to emails written in English.

Google expects the rollout to take around two weeks, though it remains unclear when, or if, the tool will extend to standard Gmail accounts or desktop users.

Anyone wanting to opt out must disable Gmail’s smart features entirely—giving up tools like Smart Compose, Smart Reply, and package tracking in the process.

While some may welcome the convenience, others may feel uneasy about their emails being analysed by large language models, especially since this process could contribute to further training of Google’s AI systems.

The move reflects a wider trend across Google’s products, where AI is becoming central to everyday user experiences.

Additional user controls and privacy commitments

According to Google Workspace, users have some control over the summary cards. They can collapse a Gemini summary card, and it will remain collapsed for that specific email thread.

Gmail will soon add enhancements, such as keeping future summary cards collapsed for users who consistently collapse them, until they choose to expand them again. For emails that don’t display automatic summaries, Gmail still offers manual options.

Users can tap the ‘summarise this email’ chip at the top of the message or use the Gemini side panel to trigger a summary manually. Google also reaffirms its commitment to data protection and user privacy. All AI features in Gmail adhere to its privacy principles, with more details available on the Privacy Hub.

AI takes over eCommerce tasks as Visa and Mastercard adapt

Visa and Mastercard have announced major AI initiatives that could reshape the future of e-commerce, marking a significant step in the evolution of retail technology.

The initiatives—Visa’s Intelligent Commerce and Mastercard’s Agent Pay—move beyond traditional recommendation engines to empower AI agents to make purchases directly on behalf of consumers.

Visa is partnering with leading tech firms, including Anthropic, IBM, Microsoft, OpenAI, and Stripe, to build a system where AI agents shop according to user preferences.

Meanwhile, Mastercard’s Agent Pay integrates payment functionality into AI-driven conversational platforms, blending commerce and conversation into a seamless user experience.

These announcements follow years of AI integration into retail, with adoption growing at 40% annually and the market projected to surpass $8 billion by 2024. Retailers initially used AI for backend optimisation, but nearly 87% now apply it in customer-facing roles.

The next phase, where AI doesn’t just suggest but acts, is rapidly taking shape—backed by consumer demand for hyper-personalisation and efficiency.

Research suggests 71% of consumers want generative AI embedded in their shopping journeys, with 58% already turning to AI tools over traditional search engines for recommendations. However, consumer trust remains a challenge.

Satisfaction with AI dropped slightly last year, highlighting concerns over privacy and implementation quality—especially critical for financial transactions.

Visa and Mastercard’s moves reflect both opportunity and necessity. With 75% of retailers viewing AI agents as essential within the next year, and AI expected to handle 20% of e-commerce tasks, the payment giants are positioning themselves as indispensable infrastructure in a fast-changing market.

Their broad alliances across AI, payments, and tech underline a shared goal: to stay central as shopping behaviours evolve in the AI era.

SCO members invited to join new AI cooperation plan

China has proposed the creation of an AI application centre in cooperation with member states of the Shanghai Cooperation Organization (SCO). The plan was introduced at the 2025 China-SCO AI Cooperation Forum, held in Tianjin, with the goal of deepening collaboration in AI across the region.

The proposed centre aims to support talent development, foster industrial partnerships, and promote open-source service cooperation.

Presented under the theme ‘Intelligence Converges in China, Wisdom Benefits SCO’, the forum brought together officials and experts to discuss practical AI cooperation and governance mechanisms that would serve the shared interests of SCO nations.

According to Huang Ru of China’s National Development and Reform Commission, closer cooperation in AI will drive economic and social growth across the SCO, reduce the digital divide, and contribute to inclusive global progress.

China reaffirmed its commitment to the ‘Shanghai Spirit’ and called for joint efforts to ensure AI development remains secure, equitable and beneficial for all member states.

How AI could quietly sabotage critical software

When Google’s Jules AI agent added a new feature to a live codebase in under ten minutes, it initially seemed like a breakthrough. But the same capabilities that allow AI tools to scan, modify, and deploy code rapidly also introduce new, troubling possibilities—particularly in the hands of malicious actors.

Experts are now voicing concern over the risks posed by hostile agents deploying AI tools with coding capabilities. If weaponised by rogue states or cybercriminals, the tools could be used to quietly embed harmful code into public or private repositories, potentially affecting millions of lines of critical software.

Even a single unnoticed line among hundreds of thousands could trigger back doors, logic bombs, or data leaks. The risk lies in how AI can slip past human vigilance.

From modifying update mechanisms to exfiltrating sensitive data or weakening cryptographic routines, the threat is both technical and psychological.

Developers must catch every mistake; an AI only needs to succeed once. As such tools become more advanced and publicly available, the conversation around safeguards, oversight, and secure-by-design principles is becoming urgent.
