Deepfake scams target executives in India and beyond

A deepfake video of Bombay Stock Exchange chief executive Sundararaman Ramamurthy circulated on social media in India, falsely offering stock advice to investors. The exchange moved quickly to report and remove the content, warning the public not to trust fake investment clips.

Cybersecurity experts say such cases are rising sharply, with one US firm estimating a 3,000 percent increase in deepfake incidents over two years. Executives in the US and the UK have also been impersonated using AI-generated audio and video.

In Hong Kong, police said a UK engineering firm lost $25m after an employee joined a video call featuring deepfake versions of senior colleagues. The transfer was made to multiple accounts before the fraud was discovered.

Security companies in the US and the UK are developing detection tools that analyse facial movement and blood flow patterns to identify AI-generated footage. Analysts warn that as costs fall and tools improve, businesses in India, Hong Kong and beyond face an escalating arms race against digital fraud.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Free plan users can now transfer data to Claude

Anthropic has enhanced its Claude AI chatbot to make switching from other platforms easier. Users on the free plan can now activate Claude’s memory feature, which allows them to import data from other AI platforms using a new dedicated tool.

The update ensures that users don’t have to start over when transferring context and history from competitors like OpenAI’s ChatGPT or Google’s Gemini.

The memory import option, first introduced in October for paid subscribers, now appears under ‘settings’ → ‘capabilities’ for all users. The tool lets users copy a prompt from their previous AI and paste the output into Claude, seamlessly transferring past interactions.

The recent popularity of Claude has been driven by tools such as Claude Code and Claude Cowork, as well as the launch of the Opus 4.6 and Sonnet 4.6 models. Upgrades enhance Claude’s coding, spreadsheet, and complex task capabilities, boosting its appeal to new users.

Anthropic’s visibility has also increased amid debates with the Pentagon, as the company refuses to loosen AI safeguards for military use, drawing ‘red lines’ around mass surveillance and autonomous weapons.


Medical chatbots spark powerful debate over serious health risks and benefits

Medical chatbots are rapidly becoming part of digital healthcare as technology companies expand AI tools into health services. Companies such as OpenAI and Anthropic are introducing chatbot features designed to answer medical questions using personal data.

Medical chatbots can analyse information from medical records, wearable devices and wellness applications. By incorporating details such as prescriptions, age and prior diagnoses, they aim to provide more personalised responses than a standard internet search.

However, companies stress that these tools are not substitutes for professional medical care. They are not intended to diagnose conditions but rather to summarise results, explain terminology and help users prepare for appointments.

Supporters argue that medical chatbots can improve patient understanding. Experts from the University of California, San Francisco, note that the tools may clarify complex reports and highlight essential health trends when used responsibly.

Despite these benefits, significant limitations remain. AI systems can hallucinate or generate inaccurate advice, and users may struggle to distinguish reliable guidance from subtle errors.

Independent research reinforces these concerns. A 2024 study by the University of Oxford found that participants who used chatbots for hypothetical health scenarios did not make better decisions than those who relied on online searches or personal judgement.

The chatbots performed well when analysing structured written cases, yet their effectiveness declined in real-world interactions, where communication gaps affected outcomes.

Privacy presents another major issue. Medical chatbots often require users to upload sensitive health information to deliver personalised responses.

Unlike doctors and hospitals, AI companies are not bound by HIPAA, the US federal health privacy law. Although platforms state that data is stored separately and not used to train models, privacy standards differ from those in traditional healthcare.

Experts from Stanford University advise users to understand these differences before sharing medical records. Transparency and informed consent are critical considerations.

Medical chatbots are also inappropriate in emergencies. Individuals experiencing symptoms such as chest pain, shortness of breath or severe headaches should seek immediate medical attention instead of consulting AI tools.

Even in non-urgent cases, specialists recommend maintaining healthy scepticism. Consulting multiple AI systems may provide a form of second opinion, but it does not replace professional medical advice.

Medical chatbots, therefore, represent both opportunity and risk. As their capabilities expand, users must carefully weigh convenience and personalisation against accuracy, oversight and data protection concerns.


Vietnam AI Law establishes comprehensive risk-based governance framework

Vietnam’s Law on Artificial Intelligence has entered into force, establishing the first dedicated AI legal framework in Southeast Asia. The law centralises oversight and replaces earlier AI provisions in the 2025 Law on Digital Technology Industry.

The framework closely mirrors the AI Act adopted by the European Union. It promotes accountability, transparency, and safety in response to risks such as misinformation, copyright infringement, and deepfakes.

At the same time, Vietnam places a stronger emphasis on digital sovereignty and domestic AI capacity. While remaining open to international integration, the law prioritises national strategic interests.

The legislation introduces a tiered risk classification system. AI systems considered to pose unacceptable risks, including threats to national security or human dignity, are banned, while low-risk applications such as spam filters face lighter obligations.

Vietnam's Ministry of Science and Technology will lead implementation. A national AI database will support monitoring and registration, and a dedicated AI development fund will invest in data centres and research capacity as part of Vietnam's broader technology strategy.


AI cybersecurity stability framework unlocks advanced Non-Human Identity management

AI is increasingly positioned as a key driver of cybersecurity stability. By analysing large volumes of data and detecting anomalies in real time, AI helps organisations strengthen defence systems and respond faster to evolving digital threats.

Modern cybersecurity challenges are closely linked to the rise of Non-Human Identities (NHIs), including machine accounts, tokens, and automated credentials. These identities require continuous monitoring and secure lifecycle management to prevent unauthorised access and data breaches.

The integration of AI with NHI management enables a more proactive security approach. AI improves visibility into access permissions and system behaviour, helping organisations reduce risks and maintain stronger control over their digital environments.

Automation powered by AI enhances operational efficiency across cybersecurity processes. Tasks such as credential rotation, access monitoring, and policy enforcement can be automated, allowing security teams to prioritise strategic decision-making.
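A minimal sketch of what automated credential rotation for non-human identities can look like in practice; the identity names, the 30-day policy window, and the helper functions are illustrative assumptions rather than any specific vendor's API.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical rotation policy: machine credentials older than 30 days
# are flagged for reissue. The threshold is an illustrative assumption.
MAX_AGE = timedelta(days=30)

def needs_rotation(issued_at: datetime, now: datetime) -> bool:
    """Flag a machine credential whose age exceeds the policy window."""
    return now - issued_at > MAX_AGE

def rotate_stale(credentials: dict[str, datetime], now: datetime) -> list[str]:
    """Return the non-human identities whose credentials should be reissued."""
    return [name for name, issued in credentials.items()
            if needs_rotation(issued, now)]

now = datetime(2026, 2, 1, tzinfo=timezone.utc)
fleet = {
    # Hypothetical machine accounts and issue dates.
    "ci-runner-token": datetime(2025, 12, 1, tzinfo=timezone.utc),
    "billing-service-key": datetime(2026, 1, 20, tzinfo=timezone.utc),
}
print(rotate_stale(fleet, now))  # → ['ci-runner-token']
```

In a real deployment, the flagged identities would be passed to a secrets manager for reissue; an AI layer would typically sit alongside a rule like this, scoring unusual access patterns rather than just credential age.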

AI also strengthens threat intelligence capabilities by identifying patterns and predicting potential attacks before they occur. This predictive capacity helps close security gaps, particularly between development, operations, and security teams.

Across sectors such as finance, healthcare, and technology, AI-driven cybersecurity solutions support compliance and data protection requirements. These systems contribute to building resilient infrastructures capable of adapting to increasingly sophisticated cyber threats.

Finally, combining AI capabilities with structured identity management creates a foundation for long-term cybersecurity resilience. Organisations adopting this approach can improve incident response, enhance adaptability, and secure future digital operations.


Chrome Gemini vulnerability allowed camera and file access

A high-severity vulnerability in Chrome's integrated Gemini AI assistant exposed users to potential camera and microphone activation, local file access, and phishing attacks. The issue, tracked as CVE-2026-0628, was disclosed by Palo Alto Networks' Unit 42 and patched by Google in January 2026.

Gemini Live operates as a privileged AI panel embedded within the browser, capable of web page summarisation and task automation. To enable multimodal functionality, the panel is granted elevated permissions, including access to screenshots, local files, and device hardware.

Researchers identified inconsistent handling of the declarativeNetRequest API when gemini.google.com was loaded inside the AI side panel rather than a standard browser tab. While extensions could inject JavaScript in both cases, the panel context inherited browser-level privileges.

A malicious extension exploiting this distinction could hijack the trusted panel and execute arbitrary code with elevated access. Potential impacts included silent activation of a camera or microphone, screenshot capture, local file exfiltration, and high-credibility phishing attacks.

Google released a fix on 5 January 2026 following responsible disclosure. Users running the latest version of Chrome are protected, and organisations are advised to ensure updates are applied across all endpoints.


Japan embraces AI amid cultural unease and labour pressures

When Hayao Miyazaki dismissed early AI-generated animation as ‘an insult to life itself’ in 2016, the technology felt distant from mainstream creative work. Less than a decade later, generative AI tools produce images and text in seconds, reviving debate over authorship, copyright, and artistic identity.

In Japan, debate reflects both anxiety and ambition. Illustrators question the use of their work in training data, while policymakers and corporations see AI as vital to easing a projected labour shortfall by 2040. Legal provisions allowing data use for analysis have intensified calls for safeguards.

Public sentiment in Japan remains broadly favourable toward AI adoption. Surveys indicate relatively high levels of trust, with many viewing AI as part of long-term structural adjustment rather than an immediate threat. Economic expectations often outweigh concerns about disruption.

Workplace implementation, however, remains limited. OECD research shows only a small share of employees actively use AI tools, citing skills shortages and cautious corporate culture. Analysts describe a paradox: AI could ease labour pressures, yet adoption is constrained by limited expertise.

Creative professionals report more immediate effects. Surveys highlight income pressures and uncertainty among illustrators and freelancers. As deployment expands, Japan faces the task of balancing economic necessity with cultural preservation and fair access to emerging technologies.


SharePoint strengthens Microsoft 365 Copilot with enterprise knowledge

Twenty-five years after its launch, SharePoint has grown into one of Microsoft’s largest collaboration platforms, serving more than one billion users annually. The service now underpins vast volumes of enterprise content, with billions of files and millions of sites created each day.

Microsoft positions the platform as a foundational knowledge layer for Microsoft 365 Copilot. As the primary grounding source for Copilot, it contributes to the Work IQ intelligence layer, enabling AI tools to operate within an organisational context.

New agentic capabilities allow teams to build solutions using natural language prompts within governed Microsoft 365 environments. Custom AI skills package organisational standards, terminology, and business logic, helping ensure outputs align with internal policies and workflows.

AI-driven publishing features are now embedded across its web authoring tools. Organisations can plan, refine, and distribute content at scale while maintaining governance controls and consistent communication standards.

Content stored in SharePoint also powers semantic indexing and retrieval systems that support contextual discovery across Microsoft 365 applications. Microsoft says these capabilities enable more proactive knowledge surfacing and strengthen Copilot’s ability to deliver grounded responses.


New public guidance launched to promote responsible AI use in Thailand

Thailand has published a draft public guidance document to help citizens use AI safely and responsibly. The ‘AI Guide for Citizens’ outlines key AI concepts, benefits, limitations, and practical examples for users engaging with generative AI tools.

Data safety is a central focus, with officials warning against entering personal identifiers, financial data, confidential information, or government secrets into public AI platforms.

The guide also details technical risks such as AI 'hallucinations', prompt injection, and data poisoning, advising users to verify outputs and treat AI as a support tool rather than a decision-maker.

The guidance addresses ethical and legal responsibilities, warning against using AI to generate misinformation, deepfakes, or harmful content. It emphasises fairness and bias, noting AI systems can inherit human prejudices from training data.

Citizens encountering AI-related scams or harmful content are advised to collect evidence, report incidents to cybercrime authorities, and contact Thailand’s personal data protection agency if privacy is compromised.

The draft aligns Thailand’s AI policies with national rules and international standards, including ISO governance principles and the EU AI Act. The initiative aims to boost AI literacy and safeguards as AI becomes more integrated into daily life.


X rolls out Paid Partnership labels to boost creator transparency

Social media platform X has introduced a new 'Paid Partnership' label that creators can attach to posts to show when content is promotional, rather than leaving audiences unsure about commercial intent.

The update improves transparency for followers while meeting rules set by the Federal Trade Commission, which expects sponsored material to be disclosed clearly.

Creators previously relied on hashtags such as #ad or #paidpartnership instead of an integrated disclosure option. The new feature allows users to apply the label through a content-disclosure toggle either during posting or afterwards.

X’s product lead, Nikita Bier, said undisclosed promotions damage trust and weaken the platform’s integrity, so the tool is meant to support creators and regulators simultaneously.

X has been trying to build a stronger creator ecosystem by offering payouts, subscriptions and other incentives. Yet many creators still favour Instagram or YouTube over X as their primary channel, because those platforms have longer-standing monetisation tools.

The addition of a built-in label aligns X with broader industry practice and aims to regain credibility among advertisers and creators.

The company has also tightened API access, preventing programmatic replies unless a user is directly mentioned or quoted.

The change seeks to limit LLM-generated spam, rather than allowing automated responses to distort discussions or appear as fake engagement beneath sponsored content.

X hopes these combined measures will enhance authenticity around commercial posts.
