AI becomes a new spiritual guide for worshippers in India

Across India, a growing number of worshippers are using AI for spiritual guidance. From chatbots like GitaGPT to robotic deities in temples, technology is changing how people connect with faith.

Apps trained on Hindu scriptures offer personalised advice, often serving as companions for those seeking comfort and purpose in a rapidly changing world.

Developers such as Vikas Sahu have built AI chatbots based on the Bhagavad Gita, attracting thousands of users in just days. Major organisations like the Isha Foundation have also adopted AI to deliver ancient wisdom through modern apps, blending spiritual teachings with accessibility.

Large religious gatherings, including the Maha Kumbh Mela, now use AI tools and virtual reality to guide and connect millions of devotees.

While many find inspiration in AI-guided spirituality, experts warn of ethical and cultural challenges. Anthropologist Holly Walters notes that users may perceive AI-generated responses as divine truth, which could distort traditional belief systems.

Oxford researcher Lyndon Drake adds that AI might challenge the authority of religious leaders, as algorithms shape interpretations of sacred texts.

Despite the risks, faith-driven AI continues to thrive. For some devotees, digital gods and chatbots offer something traditional structures often cannot: immediate, non-judgemental access to spiritual guidance at any time.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

NVIDIA and TSMC celebrate first US-made Blackwell AI chip

A collaboration between NVIDIA and TSMC has marked a historic milestone with the first NVIDIA Blackwell wafer produced on US soil.

The event, held at TSMC’s facility in Phoenix, symbolised the start of volume production for the Blackwell architecture and a major step toward domestic AI chip manufacturing.

NVIDIA’s CEO Jensen Huang described it as a moment that brings advanced technology and industrial strength back to the US.

The partnership highlights how the two companies aim to strengthen the US semiconductor supply chain by producing the world’s most advanced chips domestically.

TSMC Arizona will manufacture next-generation two-, three- and four-nanometre technologies, crucial for AI, telecommunications, and high-performance computing. The process transforms raw wafers through layering, etching, and patterning into the high-speed processors driving the AI revolution.

TSMC executives praised the achievement as the result of decades of partnership with NVIDIA, built on innovation and technical excellence.

Both companies believe that local chip production will help meet the rising global demand for AI infrastructure while securing the US’s strategic position in advanced technology manufacturing.

NVIDIA also plans to use its AI, robotics, and digital twin platforms to design and manage future American facilities, deepening its commitment to domestic production.

The companies say their shared investment signals a long-term vision of sustainable innovation, industrial resilience, and technological leadership for the AI era.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Harvard’s health division supports AI-powered medical learning

Harvard Health Publishing has partnered with Microsoft to use its health content to train the Copilot AI system. The collaboration seeks to enhance the accuracy of healthcare responses on Microsoft’s AI platform, according to the Wall Street Journal.

HHP publishes consumer health resources reviewed by Harvard scientists, covering topics such as sleep, nutrition, and pain management. The institution confirmed that Microsoft has paid to license its articles, expanding a previous agreement made in 2022.

The move is designed to make medically verified information more accessible to the public through Copilot, which now reaches over 33 million users.

Harvard’s Soroush Saghafian said the deal could help cut errors in AI-generated medical advice, a key concern in healthcare. He emphasised the importance of rigorous testing before deployment, warning that unverified tools could pose serious risks to users.

Harvard continues to invest in AI research and integration across its academic programmes. Recent initiatives include projects to address bias in medical training and studies exploring AI’s role in drug development and cancer treatment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Meta previews parental controls over teen AI character chats

Meta has previewed upcoming parental control features for its AI experiences, particularly aimed at teens’ interactions with AI characters. The new tools are expected to roll out next year.

Under the proposed controls, parents will be able to turn off chats between teens and AI characters altogether, though the broader Meta AI chatbot remains accessible. They can also block specific characters if they wish. Parents will receive topic summaries of what teens are discussing with AI characters and with Meta AI itself.

The first deployment will be on Instagram, with initial availability in English for the US, UK, Canada and Australia. Meta says it recognises the challenges parents face in guiding children through new technology, and wants these tools to simplify oversight.

Meta also notes that AI content and experiences intended for teens will follow a PG-13 standard: avoiding extreme violence, nudity and graphic drug content. Teens currently interact with only a limited set of AI characters under age-appropriate guidelines.

Additionally, Meta plans to allow time limits on AI character use by teens. The company is also detecting and discouraging attempts by users to falsify their age to bypass restrictions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Meta to pull all political ads in EU ahead of new transparency law

Meta Platforms has said it will stop selling and showing political, electoral and social issue advertisements across its services in the European Union from early October 2025. The decision follows the EU’s Transparency and Targeting of Political Advertising (TTPA) regulation coming into full effect on 10 October.

Under TTPA, platforms will be required to clearly label political ads, disclose the sponsor, the election or social issue at hand, the amounts paid, and how the ads are targeted. These obligations also include strict conditions on targeting and require explicit consent for certain data use.

Meta said the requirements create ‘significant operational challenges and legal uncertainties’ and labelled parts of the new rules ‘unworkable’ for advertisers and platforms. It said that personalised ads are widely used for issue-based campaigns and that limiting them could restrict how people access political or social issue-related information.

The company joins Google, which made a similar move last year citing comparable concerns about TTPA compliance.

While paid political, electoral and social issue ads will no longer be offered, Meta says organic political content (e.g. users posting or sharing political views) will still be permitted.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AWS glitch triggers widespread outages across major apps

A major internet outage hit some of the world’s biggest apps and sites from about 9 a.m. CET Monday, with issues traced to Amazon Web Services. Tracking sites reported widespread failures across the US and beyond, disrupting consumer and enterprise services.

AWS cited ‘significant error rates’ in DynamoDB requests in the US-EAST-1 region, impacting additional services in Northern Virginia. Engineers are working to mitigate the issue while investigating its root cause, and some customers have been unable to create or update Support Cases.

Outages clustered around Virginia’s dense data-centre corridor but rippled globally. Impacted brands included Amazon, Google, Snapchat, Roblox, Fortnite, Canva, Coinbase, Slack, Signal, Vodafone and the UK tax authority HMRC.

Coinbase told users ‘all funds are safe’ as platforms struggled to authenticate, fetch data and serve content tied to affected back-ends. Third-party monitors noted elevated failure rates across APIs and app logins.

The incident underscores heavy reliance on hyperscale infrastructure and the blast radius when core data services falter. Full restoration and a formal post-mortem are pending from AWS.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Data labelling transforms rural economies in Tamil Nadu

India’s small towns are fast becoming global hubs for AI training and data labelling, as outsourcing firms move operations beyond major cities like Bangalore and Chennai. Lower costs and improved connectivity have driven a trend known as cloud farming, which has transformed rural employment.

In Tamil Nadu, workers annotate and train AI models for global clients, preparing data that helps machines recognise objects, text and speech. Firms like Desicrew pioneered this approach by offering digital careers close to home, reducing migration to cities while maintaining high technical standards.

Desicrew’s chief executive, Mannivannan J K, says about a third of the company’s projects already involve AI, a figure expected to reach nearly all within two years. Much of the work focuses on transcription, building multilingual datasets that teach machines to interpret diverse human voices and dialects.
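To make the nature of this work concrete, the minimal sketch below shows what a single speech-transcription annotation record might look like; the field names and values are purely illustrative assumptions and are not Desicrew’s actual schema or tooling.

```python
# Hypothetical example of a speech-transcription annotation record of the kind
# a data-labelling team might produce. All field names and values are
# illustrative placeholders, not any real client's format.
import json

record = {
    "audio_file": "clip_0412.wav",
    "language": "ta-IN",                       # e.g. Tamil, one of many languages covered
    "transcript": "<verbatim transcription>",  # what the annotator hears, typed out
    "speaker_dialect": "regional variant",     # dialect tag helps models generalise
    "annotator_id": "worker_127",
    "quality_reviewed": True,                  # second-pass check before delivery
}

print(json.dumps(record, ensure_ascii=False, indent=2))
```

Thousands of such records, checked and corrected by annotators, become the multilingual training data that teaches speech models to handle different voices and dialects.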

Analysts argue that cloud farming could make rural India the world’s largest AI operations base, much as it once dominated IT outsourcing. Yet challenges remain around internet reliability, data security and client confidence.

For workers like Dhanalakshmi Vijay, who fine-tunes models by correcting their errors, the impact feels tangible. Her adjustments, she says, help AI systems perform better in real-world applications, improving everything from shopping recommendations to translation tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Public consultation: EU clarifies how DMA and GDPR work together

The European Commission and the European Data Protection Board have jointly published long-awaited guidelines clarifying how the Digital Markets Act aligns with the GDPR. The guidelines aim to remove uncertainty for large online platforms over consent requirements, data sharing and related obligations.

Under the new interpretation, gatekeepers must obtain specific and separate consent when combining user data across different services, including when using it for AI training. They cannot rely on legitimate interest or contractual necessity for such processing, closing a loophole long debated in EU privacy law.

The Guidelines also set limits on how often consent can be re-requested, prohibiting repeated or slightly altered requests for the same purpose within a year. In addition, they make clear that offering users a binary choice between accepting tracking or paying a fee will rarely qualify as freely given consent.

The Guidelines also introduce a practical standard for anonymisation, requiring platforms to prevent re-identification through technical and organisational safeguards. Consultation on the Guidelines runs until 4 December 2025, after which they are expected to shape future enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI system could help reduce childhood obesity risk

Researchers at Penn State have developed an AI model that measures children’s bite rate during meals, aiming to address a key risk factor for obesity. Eating quickly hinders fullness signals and, combined with larger bites, increases the risk of obesity.

The AI system, named ByteTrack, was trained using over 1,400 minutes of video from a study of 94 children aged seven to nine. It recognises children’s faces with 97% accuracy and detects bites about 70% as successfully as humans.

Although the system requires further refinement, the pilot study shows promise for large-scale research and potential real-world applications. With further training, ByteTrack could become a smartphone app that alerts children when they eat too quickly, encouraging healthier eating habits.

The research was funded by the National Institute of Diabetes and Digestive and Kidney Diseases, the National Institute of General Medical Sciences, and Penn State’s computational and clinical research institutes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI and fusion combine to accelerate clean energy breakthroughs

A new research partnership between Google and Commonwealth Fusion Systems (CFS) aims to accelerate the development of clean, abundant fusion energy. Fusion powers the sun and offers limitless, clean energy, but achieving it on Earth requires stabilising plasma at over 100 million degrees Celsius.

The collaboration builds on prior AI research in controlling plasma using deep reinforcement learning. Google and CFS are combining AI with the SPARC tokamak, which uses superconducting magnets in pursuit of net energy gain from fusion.

AI tools such as TORAX, a fast and differentiable plasma simulator, allow millions of virtual experiments to optimise plasma behaviour before SPARC begins operations.

AI is also being applied to find the most efficient operating paths for the tokamak, including optimising magnetic coils, fuel injection, and heat management.

Reinforcement learning agents can optimise energy output in real time while safeguarding the machine, potentially exceeding human-designed methods.
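As a rough illustration of the control-loop idea described above, the toy sketch below searches for a ‘coil adjustment’ that maximises a made-up reward on a stand-in plasma environment. The environment, state variables and reward are invented for illustration only and bear no relation to the actual SPARC, TORAX or Google controllers.

```python
# Toy control-loop sketch in the spirit of reinforcement-learning plasma control.
# Everything here is a placeholder: a single scalar 'confinement' state stands in
# for the plasma, and a random policy search stands in for a trained agent.
import random

class ToyPlasmaEnv:
    """Stand-in for a plasma simulator with one scalar confinement state."""
    def __init__(self):
        self.confinement = 0.5

    def step(self, coil_adjustment):
        # The action nudges confinement; noise mimics plasma instability.
        self.confinement += 0.1 * coil_adjustment + random.uniform(-0.02, 0.02)
        self.confinement = max(0.0, min(1.0, self.confinement))
        reward = self.confinement            # higher confinement -> more energy out
        if self.confinement < 0.1:           # penalise states that would risk the machine
            reward -= 1.0
        return self.confinement, reward

# Extremely simple policy search: try small adjustments and keep the best one.
env = ToyPlasmaEnv()
best_action, best_reward = 0.0, float("-inf")
for _ in range(1000):
    action = random.uniform(-1.0, 1.0)
    _, reward = env.step(action)
    if reward > best_reward:
        best_action, best_reward = action, reward

print(f"best coil adjustment found: {best_action:.3f} (reward {best_reward:.3f})")
```

In practice such agents learn a full control policy rather than a single action, and the virtual experiments run against a physics-based simulator such as TORAX rather than a one-line toy.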

The partnership combines advanced AI with fusion hardware to develop intelligent, adaptive control systems for future clean and sustainable fusion power plants.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot