India deploys AI to modernise its military operations

In a move reflecting its growing strategic ambitions, India is rapidly implementing AI across its defence forces. The country’s military has moved from policy to practice, using tools ranging from real-time sensor fusion to predictive maintenance to transform how it fights.

The shift has involved institutional change. India’s Defence AI Council and Defence AI Project Agency (established 2019) are steering an ecosystem that includes labs such as the Centre for Artificial Intelligence & Robotics of the Defence Research and Development Organisation (DRDO).

One recent example is Operation Sindoor (May 2025), a cross-border operation in which AI-driven platforms appeared in roles ranging from intelligence analysis to operational coordination.

This effort signals more than just a technological upgrade. It underscores a shift in warfare logic, where systems of systems, connectivity and rapid decision-making matter more than sheer numbers.

India’s incorporation of AI into capabilities such as drone swarming, combat simulation and logistics optimisation aligns with broader trends in defence innovation and digital diplomacy. The country’s strategy now places AI at the heart of its procurement demands and force design.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global robotaxi push gets Foxconn boost

Foxconn has announced it will work with Nvidia, Stellantis and Uber on developing and deploying Level 4 (hands-off, eyes-off) autonomous vehicles for robotaxi services. Foxconn brings its expertise in high-performance computing, sensor integration and electronic control systems to the partnership.

The collaboration assigns distinct roles. Nvidia contributes its DRIVE AV software stack and DRIVE AGX Hyperion 10 architecture, Stellantis provides vehicle platforms engineered for autonomy, Foxconn handles hardware and system integration, and Uber offers its global ride-service network to scale the deployment.

Foxconn chairman Young Liu described autonomous mobility as a strategic priority within the company’s EV programme, while Nvidia CEO Jensen Huang said the venture ‘is a leap in AI capability’.

This move underscores how hardware makers, AI firms and mobility service providers are converging around the autonomous-vehicle ecosystem.

It also highlights the expanding role of companies like Foxconn beyond traditional electronics manufacturing into mobility, AI and sensor integration, areas increasingly relevant for digital diplomacy, supply-chain resilience and global tech competition.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Alliance science pact lifts US–Korea cooperation on AI, quantum, 6G, and space

The United States and South Korea agreed on a broad science and technology memorandum to deepen alliance ties and bolster Indo-Pacific stability. The non-binding pact aims to accelerate innovation while protecting critical capabilities. Both sides cast it as groundwork for a new Golden Age of Innovation.

AI sits at the centre. Plans include pro-innovation policy alignment, trusted exports across the technology stack, AI-ready datasets, safety standards, and enforcement of compute protections. Joint metrology and standards work links the US Center for AI Standards and Innovation with South Korea’s AI Safety Institute.

Trusted technology leadership extends beyond AI. The memorandum outlines shared research security, capacity building for universities and industry, and joint threat analysis. Telecommunications cooperation targets interoperable 6G supply chains and coordinated standards activity with industry partners.

Quantum and basic research are priority growth areas. Participants plan interoperable quantum standards, stronger institutional partnerships, and secured supply chains. Larger projects and STEM exchanges aim to widen collaboration, supported by shared roadmaps and engagement in global consortia.

Space cooperation continues across civil and exploration programmes. Strands include Artemis contributions, a Korean cubesat rideshare on Artemis II, and Commercial Lunar Payload Services. The Korea Positioning System will be developed for maximum interoperability with GPS.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Wikipedia founder questions Musk’s Grokipedia accuracy

Speaking at the CNBC Technology Executive Council Summit in New York, Wikipedia founder Jimmy Wales expressed scepticism about Elon Musk’s new AI-powered Grokipedia, suggesting that large language models cannot reliably produce accurate wiki entries.

Wales highlighted the difficulties of verifying sources and warned that AI tools can produce plausible but incorrect information, citing examples where chatbots fabricated citations and personal details.

He rejected Musk’s claims of liberal bias on Wikipedia, noting that the site prioritises reputable sources over fringe opinions. Wales emphasised that focusing on mainstream publications does not constitute political bias but preserves trust and reliability for the platform’s vast global audience.

Despite his concerns, Wales acknowledged that AI could have limited utility for Wikipedia in uncovering information within existing sources.

However, he stressed that substantial costs and potential errors prevent the site from relying entirely on generative AI, and that he prefers careful testing before integrating new technologies.

Wales concluded that while AI may mislead the public with fake yet plausible content, the Wikipedia community’s decades of expertise in evaluating information help safeguard accuracy. He urged continued vigilance and careful source evaluation as misinformation risks grow alongside AI capabilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China outlines plan to expand high-tech industries

China has pledged to expand its high-tech industries over the next decade. Officials said emerging sectors such as quantum computing, hydrogen energy, nuclear fusion, and brain-computer interfaces will receive major investment and policy backing.

Development chief Zheng Shanjie told reporters that the coming decade will redefine China’s technology landscape, describing it as a ‘new scale’ of innovation. The government views breakthroughs in science and AI as key to boosting economic resilience amid a slowing property market and demographic decline.

The plan underscores Beijing’s push to rival Washington in cutting-edge technology, with billions already channelled into state-led innovation programmes. Public opinion in Beijing appears supportive, with many citizens expressing optimism that China could lead the next technological revolution.

Economists warn, however, that sustained progress will require tackling structural issues, including low domestic consumption and reduced investor confidence. Analysts said Beijing’s long-term success will depend on whether it can balance rapid growth with stable governance and transparent regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Spot the red flags of AI-enabled scams, says California DFPI

The California Department of Financial Protection & Innovation (DFPI) has warned that criminals are weaponising AI to scam consumers. Deepfakes, cloned voices, and slick messages mimic trusted people and exploit urgency. Learning the new warning signs cuts risk quickly.

Imposter deepfakes and romance ruses often begin with perfect profiles or familiar voices pushing you to pay or invest. Grandparent scams use cloned audio in fake emergencies; agree a family passphrase and verify on a separate channel. Influencers may flaunt fabricated credentials and followers.

Automated attacks now use AI to sidestep basic defences and steal passwords or card details. Reduce exposure with two-factor authentication, regular updates, and a reputable password manager. Pause before clicking unexpected links or attachments, even from known names.

Investment frauds increasingly tout vague ‘AI-powered’ returns while simulating growth and testimonials, then blocking withdrawals. Beware guarantees of no risk, artificial deadlines, unsolicited messages, and recruit-to-earn offers. Research independently and verify registrations before sending money.

DFPI advises careful verification before acting. Confirm identities through trusted channels, refuse to move money under pressure, and secure devices. Report suspicious activity promptly; smart habits remain the best defence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Emergency cardiology gets a lift from AI-read ECGs, with fewer false activations

AI ECG analysis improved heart attack detection and reduced false alarms in a multicentre study of 1,032 suspected STEMI cases. Conducted across three primary percutaneous coronary intervention (PCI) centres from January 2020 to May 2024, the study points to quicker, more accurate triage, especially beyond specialist hospitals.

ST-segment elevation myocardial infarction occurs when a major coronary artery is blocked. Guideline targets call for reperfusion within 90 minutes of first medical contact. Longer delays are associated with roughly a 3-fold increase in mortality, underscoring the need for rapid, reliable activation.

The AI ECG model, trained to detect acute coronary occlusion and STEMI equivalents, analysed each patient’s initial tracing. Confirmatory angiography and biomarkers identified 601 true STEMIs and 431 false positives. AI detected 553 of 601 STEMIs, versus 427 identified by standard triage on the first ECG.

False positives fell sharply with AI. Investigators reported a 7.9 percent false-positive rate with the model, compared with 41.8 percent under standard protocols. Clinicians said that earlier, more precise identification could streamline transfers from non-PCI centres and help teams reach reperfusion targets.
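
As a quick check on the comparison above, the detection rates implied by those counts can be recomputed directly. The short Python sketch below uses only the figures quoted in this summary; it does not attempt to reproduce the 7.9 and 41.8 percent false-positive rates, whose denominators are not given here.

```python
# Illustrative arithmetic only: detection rates implied by the reported counts.
# 601 confirmed STEMIs; the AI model flagged 553 on the first ECG,
# standard triage flagged 427.
true_stemis = 601
ai_detected = 553
standard_detected = 427

ai_sensitivity = ai_detected / true_stemis              # ~0.92
standard_sensitivity = standard_detected / true_stemis  # ~0.71

print(f"AI sensitivity on first ECG:  {ai_sensitivity:.1%}")
print(f"Standard triage sensitivity:  {standard_sensitivity:.1%}")
```

On these counts, the model caught roughly 92 percent of confirmed STEMIs on the first tracing, against about 71 percent for standard triage.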

An editorial welcomed the gains but urged caution. The model targets acute occlusion rather than STEMI, needs prospective validation in diverse populations, and must be integrated with clear governance and human oversight.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Most transformative decade begins as Kurzweil’s AI vision unfolds

AI no longer belongs to speculative fiction or distant possibility. In many ways, it has arrived. From machine translation and real-time voice synthesis to medical diagnostics and language generation, today’s systems perform tasks once reserved for human cognition. For those watching closely, this shift feels less like a surprise and more like a milestone reached.

Ray Kurzweil, one of the most prominent futurists of the past half-century, predicted much of what is now unfolding. In 1999, his book The Age of Spiritual Machines laid out a roadmap for how computers would grow exponentially in power and eventually match and surpass human capabilities. Over two decades later, many of his projections for the 2020s have materialised with unsettling accuracy.

The futurist who measured the future

Kurzweil’s work stands out not only for its ambition but for its precision. Rather than offering vague speculation, he produced a set of quantifiable predictions, 147 in total, with a claimed accuracy rate of over 85 percent. These ranged from the growth of mobile computing and cloud-based storage to real-time language translation and the emergence of AI companions.

Since 2012, he has worked at Google as a Director of Engineering, contributing to the development of natural language understanding systems. His core belief is that exponential growth in computing power, driven by Moore’s Law and its successors, will eventually transform both our tools and our biology.

Reprogramming the body with code

One of Kurzweil’s most controversial but recurring ideas is that human ageing is, at its core, a software problem. He believes that by the early 2030s, advancements in biotechnology and nanomedicine could allow us to repair or even reverse cellular damage.

The logic is straightforward: if ageing results from accumulated biological errors, then precise intervention at the molecular level might prevent those errors or correct them in real time.

Some of these ideas are already being tested, though results remain preliminary. For now, claims about extending life remain speculative, but the research trend is real.

Kurzweil’s perspective places biology and computation on a converging path. His view is not that we will become machines, but that we may learn to edit ourselves with the same logic we use to program them.

The brain, extended

Another key milestone in Kurzweil’s roadmap is merging biological and digital intelligence. He envisions a future where nanorobots circulate through the bloodstream and connect our neurons directly to cloud-based systems. In this vision, the brain becomes a hybrid processor, part organic, part synthetic.

By the mid-2030s, he predicts we may no longer rely solely on internal memory or individual thought. Instead, we may access external information, knowledge, and computation in real time. Some current projects, such as brain–computer interfaces and neuroprosthetics, point in this direction, but remain in early stages of development.

Kurzweil frames this not as a loss of humanity but as an expansion of its potential.

The singularity hypothesis

At the centre of Kurzweil’s long-term vision lies the idea of a technological singularity. By 2045, he believes AI will surpass the combined intelligence of all humans, leading to a phase shift in human evolution. However, this moment, often misunderstood, is not a single event but a threshold after which change accelerates beyond human comprehension.

The singularity, in Kurzweil’s view, does not erase humanity. Instead, it integrates us into a system where biology no longer limits intelligence. The implications are vast, from ethics and identity to access and inequality. Who participates in this future, and who is left out, remains an open question.

Between vision and verification

Critics often label Kurzweil’s forecasts as too optimistic or detached from scientific constraints. Some argue that while trends may be exponential, progress in medicine, cognition, and consciousness cannot be compressed into neat timelines. Others worry about the philosophical consequences of merging with machines.

Still, it is difficult to ignore the number of predictions that have already come true. Kurzweil’s strength lies not in certainty, but in pattern recognition. His work forces a reckoning with what might happen if the current pace of change continues unchecked.

Whether or not we reach the singularity by 2045, the present moment already feels like the future he described.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA expands open-source AI models to boost global innovation

US tech giant NVIDIA has released open-source AI models and data tools across language, biology and robotics to accelerate innovation and expand access to cutting-edge research.

The new model families (Nemotron, Cosmos, Isaac GR00T and Clara) are designed to empower developers to build intelligent agents and applications with enhanced reasoning and multimodal capabilities.

The company is contributing these open models and datasets to Hugging Face, further solidifying its position as a leading supporter of open research.

Nemotron models improve reasoning for digital AI agents, while Cosmos and Isaac GR00T enable physical AI and robotic systems to perform complex simulations and behaviours. Clara advances biomedical AI, allowing scientists to analyse RNA, generate 3D protein structures and enhance medical imaging.
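
For readers unfamiliar with how such releases are typically consumed, the sketch below shows the common pattern for loading an openly published checkpoint from Hugging Face with the transformers library. The model identifier is a placeholder used for illustration, not a confirmed release name, and hardware, licence and prompt-format requirements vary by model.

```python
# Minimal sketch of pulling an open checkpoint from Hugging Face with the
# transformers library. The model ID below is a placeholder, not a confirmed
# NVIDIA release name; substitute a real checkpoint from the Hugging Face hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/example-nemotron-checkpoint"  # placeholder for illustration

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarise the role of open AI models in scientific research."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```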

Major industry partners, including Amazon Robotics, ServiceNow, Palantir and PayPal, are already integrating NVIDIA’s technologies to develop next-generation AI agents.

The initiative reflects NVIDIA’s aim to create an open ecosystem that supports both enterprise and scientific innovation through accessible, transparent and responsible AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Labels press platforms to curb AI slop and protect artists

Luke Temple woke to messages about a new Here We Go Magic track he never made. An AI-generated song appeared on the band’s Spotify, Tidal, and YouTube pages, triggering fresh worries about impersonation as cheap tools flood platforms.

Platforms say defences are improving. Spotify confirmed the removal of the fake track and highlighted new safeguards against impersonation, plus a tool to flag mismatched releases pre-launch. Tidal said it removed the song and is upgrading AI detection. YouTube did not comment.

Industry teams describe a cat-and-mouse race. Bad actors exploit third-party distributors with light verification, slipping AI pastiches into official pages. Tools like Suno and Udio enable rapid cloning, encouraging volume spam that targets dormant and lesser-known acts.

Per-track revenue losses are tiny; the reputational damage is not. Artists warn that identity theft and fan confusion erode trust, especially when fakes sit beside legitimate catalogues or mimic deceased performers. Labels caution that volume is outpacing takedowns across major services.

Proposed fixes include stricter distributor onboarding, verified artist controls, watermark detection, and clear AI labels for listeners. Rights holders want faster escalation and penalties for repeat offenders. Musicians monitor profiles and report issues, yet argue platforms must shoulder the heavier lift.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!