Across India, a growing number of worshippers are using AI for spiritual guidance. From chatbots like GitaGPT to robotic deities in temples, technology is changing how people connect with faith.
Apps trained on Hindu scriptures offer personalised advice, often serving as companions for those seeking comfort and purpose in a rapidly changing world.
Developers such as Vikas Sahu have built AI chatbots based on the Bhagavad Gita, attracting thousands of users in just days. Major organisations like the Isha Foundation have also adopted AI to deliver ancient wisdom through modern apps, blending spiritual teachings with accessibility.
Large religious gatherings, including the Maha Kumbh Mela, now use AI tools and virtual reality to guide and connect millions of devotees.
While many find inspiration in AI-guided spirituality, experts warn of ethical and cultural challenges. Anthropologist Holly Walters notes that users may perceive AI-generated responses as divine truth, which could distort traditional belief systems.
Oxford researcher Lyndon Drake adds that AI might challenge the authority of religious leaders, as algorithms shape interpretations of sacred texts.
Despite the risks, faith-driven AI continues to thrive. For some devotees, digital gods and chatbots offer something traditional structures often cannot: immediate, non-judgemental access to spiritual guidance at any time.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Harvard Health Publishing has partnered with Microsoft to use its health content to train the Copilot AI system. The collaboration seeks to enhance the accuracy of healthcare responses on Microsoft’s AI platform, according to the Wall Street Journal.
HHP publishes consumer health resources reviewed by Harvard scientists, covering topics such as sleep, nutrition, and pain management. The institution confirmed that Microsoft has paid to license its articles, expanding a previous agreement made in 2022.
The move is designed to make medically verified information more accessible to the public through Copilot, which now reaches over 33 million users.
Harvard’s Soroush Saghafian said the deal could help cut errors in AI-generated medical advice, a key concern in healthcare. He emphasised the importance of rigorous testing before deployment, warning that unverified tools could pose serious risks to users.
Harvard continues to invest in AI research and integration across its academic programmes. Recent initiatives include projects to address bias in medical training and studies exploring AI’s role in drug development and cancer treatment.
Under the proposed controls, parents will be able to turn off chats between teens and AI characters altogether, though the broader Meta AI chatbot remains accessible. They can also block specific characters if they wish. Parents will receive topic summaries of what teens are discussing with AI characters and with Meta AI itself.
The first deployment will be on Instagram, with initial availability in English for the US, UK, Canada and Australia. Meta says it recognises the challenges parents face in guiding children through new technology, and wants these tools to simplify oversight.
Meta also notes that AI content and experiences intended for teens will follow a PG-13 standard: avoiding extreme violence, nudity and graphic drug content. Teens currently interact with only a limited set of AI characters under age-appropriate guidelines.
Additionally, Meta plans to allow time limits on AI character use by teens. The company is also detecting and discouraging attempts by users to falsify their age to bypass restrictions.
Under the EU's Transparency and Targeting of Political Advertising (TTPA) regulation, platforms will be required to clearly label political ads and disclose the sponsor, the election or social issue at hand, the amounts paid, and how the ads are targeted. These obligations also include strict conditions on targeting and require explicit consent for certain data use.
Meta called the requirements ‘significant operational challenges and legal uncertainties’ and labelled parts of the new rules ‘unworkable’ for advertisers and platforms. It said that personalised ads are widely used for issue-based campaigns and that limiting them might restrict how people access political or social issue-related information.
While paid political ads will be banned, Meta says organic political content (e.g. users posting or sharing political views) will still be permitted.
A major internet outage hit some of the world’s biggest apps and sites from about 9 a.m. CET Monday, with issues traced to Amazon Web Services. Tracking sites reported widespread failures across the US and beyond, disrupting consumer and enterprise services.
AWS cited ‘significant error rates’ in DynamoDB requests in the US-EAST-1 region, impacting additional services in Northern Virginia. Engineers are mitigating the issue while investigating the root cause, and some customers have been unable to create or update Support Cases.
Outages clustered around Virginia’s dense data-centre corridor but rippled globally. Impacted brands included Amazon, Google, Snapchat, Roblox, Fortnite, Canva, Coinbase, Slack, Signal, Vodafone and the UK tax authority HMRC.
Coinbase told users ‘all funds are safe’ as platforms struggled to authenticate, fetch data and serve content tied to affected back-ends. Third-party monitors noted elevated failure rates across APIs and app logins.
The incident underscores heavy reliance on hyperscale infrastructure and the blast radius when core data services falter. Full restoration and a formal post-mortem are pending from AWS.
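The ‘significant error rates’ AWS reported are precisely the failure mode that client-side retry logic is designed to absorb. A minimal sketch of the standard mitigation, retries with exponential backoff and full jitter (the function names and simulated failure below are illustrative, not AWS APIs):

```python
import random
import time

def call_with_backoff(request, max_attempts=5, base_delay=0.1, max_delay=5.0):
    """Retry a flaky request with exponential backoff and full jitter.

    `request` is any zero-argument callable that raises on failure.
    """
    for attempt in range(max_attempts):
        try:
            return request()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # retries exhausted; surface the error to the caller
            # Delay grows exponentially, capped at max_delay; full jitter
            # spreads retries out so clients don't hammer the service in sync.
            delay = random.uniform(0, min(max_delay, base_delay * 2 ** attempt))
            time.sleep(delay)

# Simulated flaky back-end: fails twice, then succeeds.
failures = iter([True, True, False])
def flaky_request():
    if next(failures):
        raise RuntimeError("503 Service Unavailable")
    return "ok"

print(call_with_backoff(flaky_request))  # prints "ok" after two retries
```

Backoff only smooths over transient errors; when a core regional service is down for an extended period, the usual complements are multi-region failover and graceful degradation.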
The EU’s Data Act is now in force, marking a major shift in European data governance. The regulation aims to expand access to industrial and Internet of Things data, giving users greater control over information they generate while maintaining safeguards for trade secrets and privacy.
Adopted as part of the EU’s Digital Strategy, the act seeks to promote fair competition, innovation, and public-sector efficiency. It enables individuals and businesses to share co-generated data from connected devices and allows public authorities limited access in emergencies or matters of public interest.
Some obligations take effect later. Requirements on product design for data access will apply to new connected devices from September 2026, while certain contract rules are deferred until 2027. Member states will set national penalties, with fines in some cases reaching up to 10% of global annual turnover.
The European Commission will assess the law’s impact within three years of its entry into force. Policymakers hope the act will foster a fairer, more competitive data economy, though much will depend on consistent enforcement and how businesses adapt their practices.
India’s small towns are fast becoming global hubs for AI training and data labelling, as outsourcing firms move operations beyond major cities like Bangalore and Chennai. Lower costs and improved connectivity have driven a trend known as cloud farming, which has transformed rural employment.
In Tamil Nadu, workers annotate and train AI models for global clients, preparing data that helps machines recognise objects, text and speech. Firms like Desicrew pioneered this approach by offering digital careers close to home, reducing migration to cities while maintaining high technical standards.
Desicrew’s chief executive, Mannivannan J K, says about a third of the company’s projects already involve AI, a figure expected to reach nearly all within two years. Much of the work focuses on transcription, building multilingual datasets that teach machines to interpret diverse human voices and dialects.
Analysts argue that cloud farming could make rural India the world’s largest AI operations base, much as it once dominated IT outsourcing. Yet challenges remain around internet reliability, data security and client confidence.
For workers like Dhanalakshmi Vijay, who fine-tunes models by correcting their errors, the impact feels tangible. Her adjustments, she says, help AI systems perform better in real-world applications, improving everything from shopping recommendations to translation tools.
The European Commission and the European Data Protection Board have jointly published long-awaited guidelines clarifying how the Digital Markets Act aligns with the GDPR. The guidelines aim to remove uncertainty for large online platforms over consent requirements and data sharing, among other issues.
Under the new interpretation, gatekeepers must obtain specific and separate consent when combining user data across different services, including when using it for AI training. They cannot rely on legitimate interest or contractual necessity for such processing, closing a loophole long debated in EU privacy law.
The guidelines also set limits on how often consent can be re-requested, prohibiting repeated or slightly altered requests for the same purpose within a year. In addition, they make clear that offering users a binary choice between accepting tracking or paying a fee will rarely qualify as freely given consent.
The guidelines also introduce a practical standard for anonymisation, requiring platforms to prevent re-identification through technical and organisational safeguards. Consultation on the guidelines runs until 4 December 2025, after which they are expected to shape future enforcement.
Researchers at Penn State have developed an AI model that measures children’s bite rate during meals, aiming to address a key risk factor for obesity. Eating quickly blunts the body’s fullness signals and, combined with larger bites, increases the risk of obesity.
The AI system, named ByteTrack, was trained using over 1,400 minutes of video from a study of 94 children aged seven to nine. It recognises children’s faces with 97% accuracy and detects bites with about 70% of the accuracy of human observers.
Although the system requires further refinement, the pilot study shows promise for large-scale research and potential real-world applications. With further training, ByteTrack could become a smartphone app that alerts children when they eat too quickly, encouraging healthier habits.
The research was funded by the National Institute of Diabetes and Digestive and Kidney Diseases, the National Institute of General Medical Sciences, and Penn State’s computational and clinical research institutes.
Spotify partners with major labels on artist-first AI tools, putting consent and copyright at the centre of product design. The plan aims to align new features with transparent labelling and fair compensation while addressing concerns about generative music flooding platforms.
The collaboration with Sony, Universal, Warner, and Merlin will give artists control over participation in AI experiences and how their catalogues are used. Spotify says it will prioritise consent, clearer attribution, and rights management as it builds new tools.
Early direction points to expanded labelling via DDEX, stricter controls against mass AI uploads, and protections against search and recommendation manipulation. Spotify’s AI DJ and prompt-based playlists hint at how engagement features could evolve without sidelining creators.
Future products are expected to let artists opt in, monitor usage, and manage when their music feeds AI-generated works. Rights holders and distributors would gain better tracking and payment flows as transparency improves across the ecosystem.
Industry observers say the tie-up could set a benchmark for responsible AI in music if enforcement matches ambition. By moving in step with labels, Spotify is pitching a path where innovation and artist advocacy reinforce rather than undermine each other.