AI tools reshape legal research and court efficiency in India

AI is rapidly reshaping India’s legal sector, as law firms and research platforms deploy conversational tools to address mounting caseloads and administrative strain.

SCC Online has launched an AI-powered legal research assistant that enables lawyers to ask complex questions in plain language, replacing rigid keyword-based searches and significantly reducing research time.

The need for speed and accuracy is pressing. India’s courts face a backlog exceeding 46 million cases, driven by procedural delays, documentation gaps, and limited judicial capacity.

Legal professionals routinely lose hours navigating precedents, limiting time for strategy, analysis, and client engagement.

Law firms are responding by embedding AI into everyday workflows. At Trilegal, AI supports drafting, document management, analytics, and collaboration, enabling lawyers to prioritise judgment and case strategy.

Secure AI platforms process high-volume legal material in minutes, improving productivity while preserving confidentiality and accuracy.

Beyond private practice, AI adoption is reshaping court operations and public access to justice. Real-time transcription, multilingual translation, and automated document analysis are shortening timelines and improving comprehension.

Incremental efficiency gains are beginning to translate into faster proceedings and broader legal accessibility.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI fuels surge in online fraud risks in 2026

Online scams are expected to surge in 2026, overtaking ransomware as the top cyber-risk, driven by the growing use of generative AI, the World Economic Forum has warned.

Executives are increasingly concerned about AI-driven scams that are easier to launch and harder to detect than traditional cybercrime. WEF managing director Jeremy Jurgens said leaders now face the challenge of acting collectively to protect trust and stability in an AI-driven digital environment.

Consumers are also feeling the impact. An Experian report found 68% of people now see identity theft as their main concern, while US Federal Trade Commission data shows consumer fraud losses reached $12.5 billion in 2024, up 25% year on year.

Generative AI is enabling more convincing phishing, voice cloning, and impersonation attempts. The WEF reported that 62% of executives experienced phishing attacks, 37% encountered invoice fraud, and 32% reported identity theft, with vulnerable groups increasingly targeted through synthetic content abuse.

Experts warn that many organisations still lack the skills and resources to defend against evolving threats. Consumer groups advise slowing down, questioning urgent messages, avoiding unsolicited requests for information, and verifying contacts independently to reduce the risk of generative AI-powered scams.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Siri to receive major AI upgrade with powerful enhancements

Apple is reportedly preparing a major overhaul of Siri by replacing the current system with an AI chatbot powered by Google’s Gemini technology. The change could mark the most significant upgrade to the assistant since its original launch.

Internal reports suggest the project aims to make Siri more conversational, capable of handling complex requests and sustained dialogue, rather than simple commands.

Future versions of iOS, iPadOS, and macOS are expected to introduce the new system. Users would still activate Siri with familiar voice commands or device buttons, regardless of the underlying technology.

Improved understanding of personal data could allow the assistant to manage calendars, photos, files, and settings more intuitively. Content creation features such as email drafting and note summarisation are also expected.

Growing competition from AI chatbots like ChatGPT and Gemini has increased pressure on Apple to modernise its digital assistant. Reports suggest a formal reveal could take place at a future developer event, followed by a broader rollout with upcoming iPhone releases.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI expands healthcare access in Africa

Healthcare in Africa is set to benefit from AI through a new initiative by the Gates Foundation and OpenAI. Horizon1000 aims to expand AI-powered support across 1,000 primary care clinics in Rwanda by 2028.

Severe shortages of health workers in Sub-Saharan Africa have limited access to quality care, with the region facing a shortfall of nearly six million professionals. AI tools will assist doctors and nurses by handling administrative tasks and providing clinical guidance.

Rwanda has launched an AI Health Intelligence Centre to make better use of limited resources and improve patient outcomes. The initiative will deploy AI in communities and homes, ensuring support reaches beyond clinic walls.

Experts believe AI represents a major medical breakthrough, comparable to vaccines and antibiotics. By helping health workers focus on patient care, the technology could reduce preventable deaths and transform health systems across low- and middle-income countries.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Kashi Vishwanath Temple launches AI chatbot

Shri Kashi Vishwanath Temple in India has launched an AI-powered chatbot to help devotees access services from anywhere in the world. The tool provides quick information on rituals, bookings, and temple timings.

Devotees can now book darshan, special aartis, and order prasad online. The chatbot also guides pilgrims on guesthouse availability and directions around Varanasi.

Supporting Hindi, English, and regional languages, the AI ensures smooth communication for global visitors. The initiative aims to simplify temple visits, especially during festivals and crowded periods.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Davos roundtable calls for responsible AI growth

Leaders from the tech industry, academia, and policy circles met at a TIME100 roundtable in Davos, Switzerland, on 21 January to discuss how to pursue rapid AI progress without sacrificing safety and accountability. The conversation, hosted by TIME CEO Jessica Sibley, focused on how AI should be built, governed, and used as it becomes more embedded in everyday life.

A major theme was the impact of AI-enabled technology on children. Jonathan Haidt, an NYU Stern professor and author of The Anxious Generation, argued that the key issue is not total avoidance but the timing and habits of exposure. He suggested children do not need smartphones until at least high school, emphasising that delaying access can help protect brain development and executive function.

Yoshua Bengio, a professor at the Université de Montréal and founder of LawZero, said responsible innovation depends on a deeper scientific understanding of AI risks and stronger safeguards built into systems from the start. He pointed to two routes: consumer and societal demand for ‘built-in’ protections, and government involvement that could include indirect regulation through liability frameworks, such as requiring insurance for AI developers and deployers.

Participants also challenged the idea that geopolitical competition should justify weaker guardrails. Bengio argued that even rivals share incentives to prevent harmful outcomes, such as AI being used for cyberattacks or the development of biological weapons, and said coordination between major powers is possible, drawing a comparison to Cold War-era cooperation on nuclear risk reduction.

The roundtable linked AI risks to lessons from social media, particularly around attention-driven business models. Bill Ready, CEO of Pinterest, said engagement optimisation can amplify divisions and ‘prey’ on negative human impulses, and described Pinterest’s shift away from maximising view time toward maximising user outcomes, even if it hurts short-term metrics.

Several speakers argued that today’s alignment approach is too reactive. Stanford computer scientist Yejin Choi warned that models trained on the full internet absorb harmful patterns and then require patchwork fixes, urging exploration of systems that learn moral reasoning and human values more directly from the outset.

Kay Firth-Butterfield, CEO of Good Tech Advisory, added that wider AI literacy, shaped by input from workers, parents, and other everyday users, should underpin future certification and trust in AI tools.

Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic releases new constitution shaping Claude’s AI behaviour

Anthropic has published a new constitution for its AI model Claude, outlining the values, priorities, and behavioural principles designed to guide its development. Released under a Creative Commons licence, the document aims to boost transparency while shaping Claude’s learning and reasoning.

The constitution plays a central role in training, guiding how Claude balances safety, ethics, compliance, and helpfulness. Rather than rigid rules, the framework explains core principles, enabling AI systems to generalise and apply nuanced judgment.

Anthropic says this approach supports more responsible decision-making while improving adaptability.

The updated framework also enables Claude to refine its own training through synthetic data generation and self-evaluation. Using the constitution in training helps future Claude models align behaviour with human values while maintaining safety and oversight.

Anthropic described the constitution as a living document that will evolve alongside AI capabilities. External feedback and ongoing evaluation will guide updates to strengthen alignment, transparency, and responsible AI development.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU urged to accelerate AI deployment under new Apply AI strategy

European policymakers are calling for urgent action to accelerate AI deployment across the EU, particularly among SMEs and scale-ups, as the bloc seeks to strengthen its position in the global AI race.

Backing the European Commission’s Apply AI Strategy, the European Economic and Social Committee said Europe must prioritise trust, reliability, and human-centric design as its core competitive advantages.

The Committee warned that slow implementation, fragmented national approaches, and limited private investment are hampering progress. While the strategy promotes an ‘AI first’ mindset, policymakers stressed the need to balance innovation with strong safeguards for rights and freedoms.

Calls were also made for simpler access to funding, lighter administrative requirements, and stronger regional AI ecosystems. Investment in skills, inclusive governance, and strategic procurement were identified as key pillars for scaling trustworthy AI and strengthening Europe’s digital sovereignty.

Support for frontier AI development was highlighted as essential for reducing reliance on foreign models. Officials argued that building advanced, sovereign AI systems aligned with European values could enable competitive growth across sectors such as healthcare, finance, and industry.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

From chips to jobs: Huang’s vision for AI at Davos 2026

AI is evolving into a foundational economic system rather than a standalone technology, according to NVIDIA chief executive Jensen Huang, who described AI as a five-layer infrastructure spanning energy, hardware, data centres, models and applications.

Speaking at the World Economic Forum in Davos, Huang argued that building and operating each layer is triggering what he called the most significant infrastructure expansion in human history, with job creation stretching from power generation and construction to cloud operations and software development.

Investment patterns suggest a structural shift instead of a speculative cycle. Venture capital funding in 2025 reached record levels, largely flowing into AI-native firms across healthcare, manufacturing, robotics and financial services.

Huang stressed that the application layer will deliver the most significant economic return as AI moves from experimentation to core operational use across industries.

Huang dismissed concerns around job displacement as misplaced, arguing that AI automates tasks rather than replacing professional judgement, freeing workers to focus on higher-value activities.

In healthcare, productivity gains from AI-assisted diagnostics and documentation are already increasing demand for radiologists and nurses rather than reducing headcount, as improved efficiency enables institutions to treat more patients.

Huang positioned AI as critical national infrastructure, urging governments to develop domestic capabilities aligned with local language, culture and industrial strengths.

He described AI literacy as an essential skill, comparable to leadership or management, while arguing that accessible AI tools could narrow global technology divides rather than widen them.

Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

South Korea sets the global standard for frontier AI regulation

South Korea will begin enforcing its Artificial Intelligence Act on Thursday, becoming the first country to introduce formal safety requirements for high-performance, or frontier, AI systems, reshaping the global regulatory landscape.

The law establishes a national AI governance framework, led by the Presidential Council on National Artificial Intelligence Strategy, and creates an AI Safety Institute to oversee safety and trust assessments.

Alongside regulatory measures, the government is rolling out broad support for research, data infrastructure, talent development, startups, and overseas expansion, signalling a growth-oriented policy stance.

To minimise early disruption, authorities will introduce a minimum one-year grace period centred on guidance, consultation, and education rather than enforcement.

Obligations cover three areas: high-impact AI in critical sectors, safety rules for frontier models, and transparency requirements for generative AI, including disclosure of realistic synthetic content.

Enforcement remains light-touch, prioritising corrective orders over penalties, with fines capped at 30 million won for persistent noncompliance. Officials said the framework aims to build public trust while supporting innovation, serving as a foundation for ongoing policy development.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!