India’s AI roadmap could add $500 billion to economy by 2035

According to the Business Software Alliance, India could add over $500 billion to its economy by 2035 through the widespread adoption of AI.

At the BSA AI Pre-Summit Forum in Delhi, the group unveiled its ‘Enterprise AI Adoption Agenda for India’, which aligns with the goals of the India–AI Impact Summit 2026 and the government’s vision for a digitally advanced economy by 2047.

The agenda outlines a comprehensive policy framework across three main areas: talent and workforce, infrastructure and data, and governance.

It recommends expanding AI training through national academies, fostering industry–government partnerships, and establishing innovation hubs with global companies to strengthen talent pipelines.

BSA also urged greater government use of AI tools, reforms to data laws, and the adoption of open industry standards for content authentication. It called for coordinated governance measures to ensure responsible AI use, particularly under the Digital Personal Data Protection Act.

BSA has introduced similar policy roadmaps in other major markets, including the US, Japan, and ASEAN countries, as part of its global effort to promote trusted and inclusive AI adoption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Snap brings Perplexity’s answer engine into Chat for nearly a billion users

Starting in early 2026, Perplexity’s AI will be integrated into Snapchat’s Chat, accessible to nearly 1 billion users. Snapchatters can ask questions and receive concise, cited answers in-app. Snap says the move reinforces its position as a trusted, mobile-first AI platform.

Under the deal, Perplexity will pay Snap $400 million in cash and equity over a one-year period, tied to the global rollout. Revenue contribution is expected to begin in 2026. Snap points to its 943 million monthly active users and its reach of over 75% of 13–34-year-olds in more than 25 countries.

Perplexity frames the move as meeting curiosity where it occurs, within everyday conversations. Evan Spiegel says Snap aims to make AI more personal, social, and fun, woven into friendships and conversations. Both firms pitch the partnership as enhancing discovery and learning on Snapchat.

Perplexity joins, rather than replaces, Snapchat’s existing My AI. Messages sent to Perplexity will inform personalisation on Snapchat, similar to My AI’s current behaviour. Snap claims the approach is privacy-safe and designed to provide credible, real-time answers from verifiable sources.

Snap casts this as a first step toward a broader AI partner platform inside Snapchat. The companies plan creative, trusted ways for leading AI providers to reach Snap’s global community. The integration aims to enable seamless, in-chat exploration while keeping users within Snapchat’s product experience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

How GEMS turns Copilot time savings into personalised teaching at scale

GEMS Education is rolling out Microsoft 365 Copilot to cut admin and personalise learning, with clear guardrails and transparency. Teachers spend less time on preparation and more time with pupils. The aim is augmentation, not replacement.

Copilot serves as a single workspace for plans, sources, and visuals. Differentiated materials arrive faster for struggling and advanced learners. More time goes to feedback and small groups.

Student projects are accelerating. A Grade 8 pupil built a smart-helmet prototype, using AI to guide circuitry, code, and documentation. The project moved quickly from idea to functional build.

The School of Research and Innovation opened in August 2025 as a living lab, hosting educator training, research partners, and student incubation. A Microsoft-backed stack underpins the campus.

Teachers are co-creating lightweight AI agents for curriculum and analytics. Expert oversight and safety patterns stay central. The focus is on measurable time savings and real-world learning.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New AI tool helps identify individuals at risk of suicide

Researchers at Touro University have found that an AI tool can identify suicide risk that standard diagnostic methods often miss. The study, published in the Journal of Personality Assessment, shows that LLMs can analyse speech to detect patterns linked to perceived suicide risk.

Current assessment methods, such as multiple-choice questionnaires, often fail to capture the nuances of an individual’s experience.

The study used Claude 3.5 Sonnet to analyse 164 participants’ audio responses, examining future self-continuity, a key factor linked to suicide risk. The AI detected subtle cues in speech, including coherence, emotional tone, and detail, which traditional tools overlooked.

While the research focused on perceived risk rather than actual suicide attempts, identifying individuals who feel at risk is crucial for timely intervention. LLM predictions could be used in hospitals, hotlines, or therapy sessions as a new tool for mental health professionals.

Beyond suicide risk, large language models may also help detect other mental health conditions such as depression and anxiety, providing faster, more nuanced insights into patients’ mental well-being and supporting early intervention strategies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Material-level AI emerges in MIT–DeRucci sleep science collaboration

MIT’s Sensor and Ambient Intelligence group, led by Joseph Paradiso, unveiled ‘FiberCircuits’, a smart-fibre platform co-developed with DeRucci. It embeds sensing, edge inference, and feedback directly in fibres to create ‘weavable intelligence’. The aim is natural, low-intrusion human–computer interaction.

Teams embedded AI micro-sensors and sub-millimetre ICs to capture respiration, movement, skin conductance, and temperature, running tinyML locally for privacy. Feedback via light, sound, or micro-stimulation closes the loop while keeping power and data exposure low.

Sleep science prototypes included a mattress with distributed sensors for posture recognition, an eye mask combining PPG and EMG, and an IMU-enabled pillow. Prototypes were used to validate signal parsing and human–machine coupling across various sleep scenarios.

Edge-first design places most inference on the fibre to protect user data and reduce interference, according to DeRucci’s CTO, Chen Wenze. Collaboration covered architecture, algorithms, and validation, with early results highlighting comfort, durability, and responsiveness suitable for bedding.

Partners plan to expand cohorts and scenarios into rehabilitation and non-invasive monitoring, and to release selected algorithms and test protocols. Paradiso framed material-level intelligence as a path to gentler interfaces that blend into everyday environments.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI brain atlas reveals unprecedented detail in MRI scans

Researchers at University College London have developed NextBrain, an AI-assisted brain atlas that visualises the human brain in unprecedented detail. The tool links microscopic tissue imaging with MRI, enabling rapid and precise analysis of living brain scans.

NextBrain maps 333 brain regions using high-resolution post-mortem tissue data, which is combined into a digital 3D model with the aid of AI. The atlas was created over the course of six years by dissecting, photographing, and digitally reconstructing five human brains.

AI played a crucial role in aligning microscope images with MRI scans, ensuring accuracy while significantly reducing the time required for manual labelling. The atlas detects subtle changes in brain sub-regions such as the hippocampus, which is crucial for studying diseases like Alzheimer’s.

Testing on thousands of MRI scans demonstrated that NextBrain reliably identifies brain regions across different scanners and imaging conditions, enabling detailed analysis of ageing patterns and early signs of neurodegeneration.

All data, tools, and annotations are openly available through the FreeSurfer neuroimaging platform. The public release of NextBrain aims to accelerate research, support diagnosis, and improve treatment for neurological conditions worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tinder tests AI feature that analyses photos for better matches

Tinder is introducing an AI feature called Chemistry, designed to better understand users through interactive questions and optional access to their Camera Roll. The system analyses personal photos and responses to infer hobbies and preferences, offering more compatible match suggestions.

The feature is being tested in New Zealand and Australia ahead of a broader rollout as part of Tinder’s 2026 product revamp. Match Group CEO Spencer Rascoff said Chemistry will become a central pillar in the app’s evolving AI-driven experience.

Privacy concerns have surfaced as the feature requests permission to scan private photos, similar to Meta’s recent approach to AI-based photo analysis. Critics argue that such expanded access offers limited benefits to users compared to potential privacy risks.

Match Group expects a short-term financial impact, projecting a $14 million revenue decline due to Tinder’s testing phase. The company continues to face user losses despite integrating AI tools for safer messaging, better profile curation and more interactive dating experiences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU Advocate General backs limited seizure of work emails in competition probes

An Advocate General of the Court of Justice of the European Union has said national competition authorities may lawfully seize employee emails during investigations without prior judicial approval. The opinion applies only when a strict legal framework and effective safeguards against abuse are in place.

The case arose after Portuguese medical companies challenged the competition authority’s seizure of staff emails, arguing it breached the right to privacy and correspondence under the EU Charter of Fundamental Rights. The authority acted under authorisation from the Public Prosecutor’s Office.

According to the Advocate General, such seizures may limit privacy and data protection rights under Articles 7 and 8 of the Charter, but remain lawful if proportionate and justified. The processing of personal data is permitted under the GDPR where it serves the public interest in enforcing competition law.

The opinion emphasised that access to business emails did not undermine the essence of data protection rights, as the investigation focused on professional communications. The final judgment from the CJEU is expected to clarify how privacy principles apply in competition law enforcement across the EU.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ByteDance cuts use of Claude after Anthropic blocks China access

An escalating tech clash has emerged between ByteDance and Anthropic over AI access and service restrictions. ByteDance has halted use of Anthropic’s Claude model on its infrastructure after the US firm imposed access limitations for Chinese users.

The suspension follows Anthropic’s move to restrict China-linked deployments and aligns with broader geopolitical tensions in the AI sector. ByteDance reportedly said it would now rely on domestic alternatives, signalling a strategic pivot away from Western-based AI models.

Industry watchers view the dispute as a marker of how major tech firms are navigating export controls, national security concerns and sovereignty in AI. Observers warn the rift may prompt accelerated investment in home-grown AI ecosystems by Chinese companies.

While neither company has detailed all operational impacts, the episode highlights AI’s fraught position at the intersection of technology and geopolitics. Market reaction may hinge on whether other firms follow suit or whether partnerships are redefined around regional access.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Social media platforms ordered to enforce minimum age rules in Australia

Australia’s eSafety Commissioner has formally notified major social media platforms, including Facebook, Instagram, TikTok, Snapchat, and YouTube, that they must comply with new minimum age restrictions from 10 December.

The rule will require these services to prevent users under 16 from creating accounts.

eSafety determined that nine popular services currently meet the definition of age-restricted platforms since their main purpose is to enable online social interaction. Platforms that fail to take reasonable steps to block underage users may face enforcement measures, including fines of up to AU$49.5 million.

The agency clarified that the list of age-restricted platforms will not remain static, as new services will be reviewed and reassessed over time. Others, such as Discord, Google Classroom, and WhatsApp, are excluded for now as they do not meet the same criteria.

Commissioner Julie Inman Grant said the new framework aims to delay children’s exposure to social media and limit harmful design features such as infinite scroll and opaque algorithms.

She emphasised that age limits are only part of a broader effort to build safer, more age-appropriate online environments supported by education, prevention, and digital resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!