Medical chatbots spark powerful debate over serious health risks and benefits

Medical chatbots are rapidly becoming part of digital healthcare as technology companies expand AI tools into health services. Companies such as OpenAI and Anthropic are introducing chatbot features designed to answer medical questions using personal data.

Medical chatbots can analyse information from medical records, wearable devices and wellness applications. By incorporating details such as prescriptions, age and prior diagnoses, they aim to provide more personalised responses than a standard internet search.

However, companies stress that these tools are not substitutes for professional medical care. They are not intended to diagnose conditions but rather to summarise results, explain terminology and help users prepare for appointments.

Supporters argue that medical chatbots can improve patient understanding. Experts from the University of California, San Francisco, note that the tools may clarify complex reports and highlight essential health trends when used responsibly.

Despite these benefits, significant limitations remain. AI systems can hallucinate or generate inaccurate advice, and users may struggle to distinguish reliable guidance from subtle errors.

Independent research reinforces these concerns. A 2024 study by the University of Oxford found that participants who used chatbots for hypothetical health scenarios did not make better decisions than those who relied on online searches or personal judgement.

Performance was strong when analysing structured written cases. Yet effectiveness declined during real-world interactions, where communication gaps affected outcomes.

Privacy presents another major issue. Medical chatbots often require users to upload sensitive health information to deliver personalised responses.

Unlike doctors and hospitals, AI companies are not bound by HIPAA, the US federal health privacy law. Although platforms state that data is stored separately and not used to train models, privacy standards differ from those in traditional healthcare.

Experts from Stanford University advise users to understand these differences before sharing medical records. Transparency and informed consent are critical considerations.

Medical chatbots are also inappropriate in emergencies. Individuals experiencing symptoms such as chest pain, shortness of breath or severe headaches should seek immediate medical attention instead of consulting AI tools.

Even in non-urgent cases, specialists recommend maintaining healthy scepticism. Consulting multiple AI systems may provide a form of second opinion, but it does not replace professional medical advice.

Medical chatbots, therefore, represent both opportunity and risk. As their capabilities expand, users must carefully weigh convenience and personalisation against accuracy, oversight and data protection concerns.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Chrome Gemini vulnerability allowed camera and file access

A high-severity vulnerability in Chrome’s integrated Gemini AI assistant exposed users to the potential activation of the camera and microphone, local file access, and phishing attacks. The issue, tracked as CVE-2026-0628, was disclosed by Palo Alto Networks’ Unit 42 and patched by Google in January 2026.

Gemini Live operates as a privileged AI panel embedded within the browser, capable of web page summarisation and task automation. To enable multimodal functionality, the panel is granted elevated permissions, including access to screenshots, local files, and device hardware.

Researchers identified inconsistent handling of the declarativeNetRequest API when gemini.google.com was loaded inside the AI side panel rather than a standard browser tab. While extensions could inject JavaScript in both cases, the panel context inherited browser-level privileges.

A malicious extension exploiting this distinction could hijack the trusted panel and execute arbitrary code with elevated access. Potential impacts included silent activation of a camera or microphone, screenshot capture, local file exfiltration, and high-credibility phishing attacks.

Google released a fix on 5 January 2026 following responsible disclosure. Users running the latest version of Chrome are protected, and organisations are advised to ensure updates are applied across all endpoints.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New public guidance launched to promote responsible AI use in Thailand

Thailand has published a draft public guidance document to help citizens use AI safely and responsibly. The ‘AI Guide for Citizens’ outlines key AI concepts, benefits, limitations, and practical examples for users engaging with generative AI tools.

Data safety is a central focus, with officials warning against entering personal identifiers, financial data, confidential information, or government secrets into public AI platforms.

The guide also details technical risks such as AI’ hallucinations,’ prompt injection, and data poisoning, advising users to verify outputs and treat AI as a support tool rather than a decision maker.

The guidance addresses ethical and legal responsibilities, warning against using AI to generate misinformation, deepfakes, or harmful content. It emphasises fairness and bias, noting AI systems can inherit human prejudices from training data.

Citizens encountering AI-related scams or harmful content are advised to collect evidence, report incidents to cybercrime authorities, and contact Thailand’s personal data protection agency if privacy is compromised.

The draft aligns Thailand’s AI policies with national rules and international standards, including ISO governance principles and the EU AI Act. The initiative aims to boost AI literacy and safeguards as AI becomes more integrated into daily life.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Europe pressed to slow digital age-verification push amid privacy fears

Hundreds of academics urged governments to halt plans for mandatory age checks on social media, rather than accelerating deployment without assessing the risks.

The warning arrives as several European states consider restrictions on children’s access to online platforms and as companies promote verification tools such as live selfies or uploads of government-issued IDs.

Researchers argue that current systems expose people to privacy breaches, security vulnerabilities and malicious sites that ignore verification rules instead of offering meaningful protection.

They say scientific consensus has not yet formed on the benefits or harms of age-assurance technologies, making large-scale implementation premature and potentially discriminatory.

The letter stresses that any credible system would require cryptographic safeguards for every query, protecting data in transit rather than leaving identity checks to platforms without robust technical guarantees.

Academics believe such infrastructure would be complex to build globally and would create friction that many providers may refuse to adopt.

Concern escalated after early deployments in Italy and France, where verification is already mandatory.

Signatories, including Ronald Rivest and Bart Preneel, warn that governments risk introducing a socially unacceptable system that increases exposure to data misuse instead of ensuring children’s safety online.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

X rolls out Paid Partnership labels to boost creator transparency

The social media platform, X, has introduced a new ‘Paid Partnership’ label that creators can attach to posts to show when content is promotional instead of leaving audiences unsure about commercial intent.

An update that improves transparency for followers while meeting rules set by the Federal Trade Commission, which expects sponsored material to be disclosed clearly.

Creators previously relied on hashtags such as #ad or #paidpartnership instead of an integrated disclosure option. The new feature allows users to apply the label through a content-disclosure toggle either during posting or afterwards.

X’s product lead, Nikita Bier, said undisclosed promotions damage trust and weaken the platform’s integrity, so the tool is meant to support creators and regulators simultaneously.

X has been trying to build a stronger creator ecosystem by offering payouts, subscriptions and other incentives. Yet many creators still favour Instagram or YouTube over X as their primary channel, because those platforms have longer-standing monetisation tools.

The addition of a built-in label aligns X with broader industry practice and aims to regain credibility among advertisers and creators.

The company has also tightened API access, preventing programmatic replies unless a user is directly mentioned or quoted.

A change that seeks to limit LLM-generated spam instead of allowing automated responses to distort discussions or appear as fake engagement beneath sponsored content.

X hopes these combined measures will enhance authenticity around commercial posts.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Non-human identities gain importance in cloud and AI security

As organisations expand across cloud environments, non-human identities are becoming a critical component of modern cybersecurity strategies. Managing machine identities and their associated secrets is increasingly central to reducing risk and improving AI-driven threat detection.

As digital infrastructure grows, machine identities function as secure access credentials for applications, services, and automated processes. Effective governance can reduce vulnerabilities, improve compliance, and streamline operations across sectors such as finance and healthcare.

Integrating non-human identities into AI security frameworks enables more contextual anomaly detection and improved visibility into network behaviour. Rather than relying solely on static scanning, organisations can adopt adaptive models that enhance predictive threat response.

Challenges remain, particularly around coordination between security, DevOps, and research teams. Gaps in collaboration and limited awareness of identity lifecycle management can create blind spots that weaken overall cyber resilience.

Automation is increasingly seen as essential for scaling non-human identity management. By automating secrets rotation, certificate renewal, and access reviews, organisations can strengthen governance while enabling security teams to focus on higher-value strategic priorities.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

A new bill aims to formalise crypto taxation in Turkey

Turkey’s ruling AK Party has introduced a bill in parliament to formalise cryptocurrency taxation and revise key tax and spending rules. The legislation links crypto taxation to Turkey’s Capital Markets Law and sets a clear framework for digital assets.

Under the proposal, regulated crypto platforms would withhold a 10% tax on gains quarterly, applicable to both individuals and companies, residents and non-residents. Transaction service providers are subject to a 0.03% tax, and investors on unlicensed platforms must declare gains annually.

The president would have the authority to adjust the withholding tax between 0% and 20%, depending on factors such as token type, holding period, issuer, or wallet type. Exemptions include VAT-free crypto deliveries and corporate tax changes for foundation university hospitals from 2027.

If approved, the crypto taxation provisions would take effect two months after publication, signalling Turkey’s first formal steps to regulate digital assets and integrate them into the national tax system.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU pressures Meta over alleged smart glasses privacy breaches

Lawmakers in the European Parliament are pressing the European Commission for clarity after reports that Meta’s smart glasses recorded people in intimate moments without their knowledge.

Concerns intensified when Swedish outlets reported that Ray-Ban AI glasses captured and uploaded sensitive footage in violation of strict consent requirements under the EU’s General Data Protection Regulation.

The reports indicate that personal data from EU users was sent to Sama, a third-party contractor, in Kenya for human review. Annotators working there said they viewed images of individuals changing clothes and believed the recordings were taken without consent.

They added that Meta’s attempts to blur faces or apply other safeguards failed often enough to expose identifiable material instead of ensuring proper anonymisation.

EU privacy law requires clear information and consent before collecting and processing personal data, and additional safeguards when exporting data to countries without recognised adequacy status.

Kenya is still negotiating such recognition with the Commission, meaning contractual protections would be necessary.

The Irish Data Protection Commission, responsible for Meta’s GDPR oversight, has been contacted amid questions about whether Meta complied with EU requirements.

Lawmakers also want the Commission to examine whether proposed changes in the Digital Omnibus package could dilute privacy protections rather than strengthen them.

Critics argue the reforms might ease data-use rules for AI training at a moment when allegations about Meta’s smart glasses have intensified scrutiny of the EU’s broader digital policy agenda.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Samsung settles Texas lawsuit over smart TV data collection

Samsung has settled a lawsuit with the Texas Attorney General over allegations that its smart TVs collected viewing data without users’ informed consent.

Texas Attorney General Ken Paxton filed the suit last December, accusing Samsung of using Automated Content Recognition (ACR) technology to capture screenshots of what consumers were watching and using that information for targeted advertising.

As part of the settlement, Samsung must halt any collection or processing of ACR viewing data without first obtaining the express consent of Texas consumers.

The company is also required to update its smart TVs with clear, conspicuous disclosure and consent screens, replacing what a court had previously identified as ‘dark patterns’ requiring over 200 clicks to access privacy settings.

Samsung stated that it does not believe its Viewing Information Services system violated any regulations, but agreed to strengthen its privacy disclosures. Paxton noted that other smart TV manufacturers, including Sony, LG, Hisense, and TCL Technologies, have not yet made similar changes in response to ongoing lawsuits.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Quantum-safe security upgrades SIM and eSIM cards

Thales has successfully demonstrated a world-first capability that prepares 5G networks for the era of quantum computing. The test proved that SIM and eSIM cards can be remotely upgraded to support post-quantum cryptography, boosting security without disrupting services or user experience.

The breakthrough highlights the potential of crypto-agile networks to evolve securely as quantum threats emerge.

Replacing millions of devices is impractical, so Thales enables operators to deploy quantum-safe algorithms directly to existing devices. Remote upgrades preserve data and connectivity while instantly boosting security, keeping 5G networks resilient and trusted.

The demonstration reinforces Thales’ leadership in post-quantum cryptography, with dedicated research teams developing quantum-resistant methods and contributing to international standards, including NIST initiatives.

Operators can now protect long-term investments, secure critical services, and prepare for the next generation of quantum computing without operational disruptions.

Thales’ approach offers a practical roadmap for telecoms to adopt quantum-safe security today, ensuring continuity, trust, and resilience across mobile networks as digital threats evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot