Amazon launches Health AI to assist with medical queries

Amazon has launched a new AI-powered assistant, Health AI, on its website and mobile app. The tool is designed to answer health questions, explain medical records, manage prescriptions, and connect users with healthcare providers.

Health AI can also book appointments and guide users based on their health information if they grant access to their records. The feature is currently limited to the US, with a wider rollout planned in the coming weeks.

The assistant is linked with One Medical, Amazon’s healthcare service, allowing users to communicate with licensed professionals through messages, video consultations, or in-person visits. It can also send prescription renewal requests and suggest relevant health products.

Users can create an Amazon Health Profile and enable two-step authentication to start using Health AI. By allowing the AI to access their medical records, including medications, lab results, and diagnoses, users can receive more personalised responses.

Amazon emphasises that Health AI is a support tool rather than a replacement for doctors. It helps users understand health information and prepare for discussions with healthcare providers, but it does not provide independent diagnoses or treatment.

As part of an introductory offer, eligible US Prime members can receive up to five free message consultations with One Medical providers. The system runs on Amazon Bedrock and uses multiple AI agents to manage tasks, monitor interactions, and escalate to human professionals when necessary.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

New York moves to ban chatbots from giving legal and medical advice

New York lawmakers are considering legislation that would ban AI chatbots from providing legal or medical advice. The bill aims to stop automated systems from impersonating licensed professionals such as doctors and lawyers.

The proposal would also require chatbot operators to clearly inform users that they are interacting with an AI system. Notices must be prominent, written in the same language as the chatbot, and use a readable font.

A key feature of the bill is a private right of action, which would allow users to file civil lawsuits against chatbot owners who violate the law and recover damages and legal fees. Experts say this enforcement tool strengthens the rules and deters abuse.

Supporters of the legislation argue it protects New Yorkers’ safety, particularly minors. Other bills in the same package would regulate online platforms like Roblox and set standards for generative AI, synthetic content, and the handling of biometric data.

The bill’s author, state Senator Kristen Gonzalez, said AI innovation should not come at the expense of public safety. She pointed to recent cases where AI chatbots were linked to harmful outcomes for minors, highlighting the need for transparency and accountability.

If passed, the law would take effect 90 days after the governor signs it. Lawmakers hope it will balance innovation with user protection, ensuring AI tools are used responsibly and safely across the state.


GitHub malware campaign uses SEO tricks to steal browser data

Cybersecurity researchers have uncovered a malware campaign spreading through over 100 GitHub repositories disguised as free software tools. Hackers used SEO-heavy descriptions to make their fake repositories appear high in search results, close to legitimate software.

Users searching for popular programs were directed to counterfeit download pages. These pages offered ZIP files containing BoryptGrab, malware designed to steal data from infected Windows systems. The files were disguised as cracked software, gaming cheats, or utility tools.

The malware collects sensitive information, including browser passwords, cookies, and cryptocurrency wallet details. It can access nine major browsers, including Chrome, Edge, Firefox, Opera, Brave, and Vivaldi, and bypass some security protections.

Certain variants also install additional tools allowing remote access and persistent control over infected machines. This enables hackers to run commands, maintain ongoing access, and steal more information without the user’s knowledge.

Trend Micro, the cybersecurity firm that reported the campaign, noted that some code and logs suggest a possible Russian origin, though attribution is not confirmed. Experts warn that the combination of GitHub hosting and search engine manipulation makes this attack method especially dangerous.

Users are advised to download software only from trusted sources and to verify the authenticity of the repository. Organisations should follow security best practices such as software allowlisting, maintaining inventory, and removing unauthorised applications to prevent similar attacks.


Smart Classrooms initiative transforms learning in 10 Thai pilot schools

Ten pilot schools in Buriram and Si Sa Ket provinces have launched Smart Classrooms under the UNESCO–Huawei TEOSA initiative, supporting Thailand’s drive to expand digital education.

Led by UNESCO Bangkok in partnership with Thailand’s Ministry of Education and Huawei Technologies Co., Ltd, the Smart Classrooms initiative aims to strengthen digital learning environments, equip teachers with digital and AI competencies, and support policy development for AI in education. The programme also supports Thailand’s ‘Transforming Education in the Digital Era’ policy and the National AI Strategy and Action Plan (2022–2027).

Each province has one designated ‘mother school’ that serves as a regional digital hub, supporting four surrounding ‘child schools’ by sharing resources, training, and expertise. In total, the ten pilot schools have received high-speed internet, interactive digital displays, and collaborative learning platforms that support real-time content sharing and blended learning. Forty-five teachers from the pilot schools also participated in hands-on demonstrations of Smart Classrooms systems on 4–5 March.

‘This new technology will help translate theory into practice, allowing students to experiment, test strategies, and see results immediately,’ said Pathanapong Momprakhon, Principal of Paisan Pittayakom School. UNESCO Bangkok’s Deputy Director and Chief of Education, Marina Patrier, highlighted the importance of combining infrastructure with teacher capacity-building.

‘At UNESCO, we are committed to promoting the ethical and inclusive use of AI in ways that empower teachers and expand opportunities for every learner,’ Ms Patrier said at the launch. ‘While Smart Classrooms provide important tools, it is teachers’ creativity, professional judgement and leadership that ultimately bring these innovations to life.’

Chitralada Chanyaem of the Thai National Commission for UNESCO highlighted the importance of collaboration in advancing digital education.

‘The UNESCO–Huawei Funds-in-Trust Project on Technology-Enabled Open Schools for All stands as a powerful example of collaboration dedicated to transforming education into a system that is open, inclusive, flexible, and resilient in the face of a rapidly changing world,’ she said. ‘As the future of education cannot be confined within classroom walls, it must bridge sectors and communities, working collaboratively to create equitable and sustainable opportunities for all.’

Teachers observed Huawei technical staff and master teachers demonstrate how digital tools and AI-supported applications can be used in everyday lessons. Ms Piyaporn Kidsirianan, Public Relations Manager at Huawei Technologies (Thailand) Co., Ltd, said the initiative aims to reduce digital inequality.

‘The Open Schools for All initiative represents a commitment to using technology as a bridge to deliver quality education to remote and underserved communities.’ The TEOSA Smart Classrooms initiative combines policy support, digital infrastructure upgrades, and teacher training to help translate Thailand’s digital education ambitions into practical impact at the school level.


Concerns grow over Grok AI content on X platform

Social media platform X has launched an investigation into racist and offensive posts generated by its Grok AI chatbot in the UK. The review follows a Sky News analysis that flagged troubling responses produced publicly by the system.

Analysis by the broadcaster found Grok generating highly offensive replies, including profanities targeting certain religions. Some responses also repeated false claims blaming Liverpool supporters for the 1989 Hillsborough disaster.

Sky News reporter Rob Harris said X safety teams were urgently examining the chatbot’s behaviour after the posts spread online. The company and its AI developer xAI did not immediately respond to requests for comment.

Concerns around Grok come as governments and regulators increasingly scrutinise AI-generated content on social platforms. Authorities in several countries have already raised alarms about sexually explicit or harmful material created by chatbots.

Earlier this year, xAI introduced new restrictions to limit some image editing features in Grok. Users in certain jurisdictions were also blocked from generating images of people in revealing clothing where such content is illegal.


AI biotech firm pushes limits of human lifespan

Longevity research is gaining momentum as AI transforms the way scientists search for new medicines. Insilico Medicine, founded by Alex Zhavoronkov in 2014, combines machine learning and automation to study ageing and accelerate drug discovery.

Company research focuses on identifying biological targets linked to ageing and developing molecules to treat related diseases. Several experimental treatments have already received Investigational New Drug clearance, allowing them to move towards human clinical trials.

Insilico also became the first AI-driven biotech company to list on the Hong Kong Stock Exchange, raising HK$2.28 billion in its public offering. Zhavoronkov said careful financial planning was essential because enthusiasm around AI could still form a market bubble.

Expansion plans now include deeper partnerships across China and the Middle East. A new collaboration in the UAE aims to build regional AI drug discovery programmes and diversify economies beyond oil.

Beyond medicines, Zhavoronkov envisions integrated biotech ecosystems where living spaces, healthcare and research operate together. Such hubs allow scientists and citizens to contribute health data that helps develop future treatments.


Pentagon AI dispute raises concerns for startups

A dispute between Anthropic and the Pentagon in the US has raised questions about whether startups will hesitate to pursue defence contracts. Negotiations over the use of Anthropic’s Claude AI technology collapsed, prompting the US administration to label the company a supply chain risk.

The situation escalated as OpenAI secured its own agreement with the Pentagon. The development sparked backlash online, with reports of a surge in ChatGPT uninstalls after the defence partnership was announced.

Technology analysts say the controversy highlights the unusual scrutiny facing high-profile AI firms. Companies such as OpenAI and Anthropic attract intense public attention because widely used AI products place their defence partnerships in the spotlight.

Startup founders are now debating the risks of government contracts, particularly with the Pentagon. Industry observers warn that shifting contract terms from defence authorities could make government collaboration more uncertain.


China strengthens online safeguards for minors

Chinese authorities have introduced new rules to classify online content that could affect the health and well-being of minors. Set to take effect on 1 March, the measures aim to adapt to a rapidly evolving internet landscape.

Top government bodies, including those in cyberspace, education, publishing, film, culture, tourism, public security, and radio and television, jointly released the initiative. Together, they outlined four categories of content that could negatively impact minors and specified their key characteristics.

Recent issues, such as the misuse of minors’ images, have been integrated into the regulatory framework. Authorities also established preventive guidelines to manage risks from emerging technologies, including algorithmic recommendations and generative AI.

Internet platforms and content producers are now required to take both proactive and corrective measures against harmful content. The rules emphasise that platforms must monitor, block, or remove information that could affect minors’ well-being.

The Cyberspace Administration of China pledged to continue purifying the online environment. Authorities will urge platforms to assume their primary responsibilities and strengthen governance of content affecting young users, aiming to create a safer and healthier digital space for children.


Sovereign AI becomes a strategic question for governments

Governments across the world are increasingly treating AI as a strategic capability that shapes economic development, public services and national security. Momentum behind the idea of ‘sovereign AI’ is growing as countries reassess who controls the chips, cloud infrastructure, data and models powering modern technology.

Complete control over the entire AI stack remains unrealistic for most economies because of the enormous financial and technological costs involved. Global infrastructure continues to rely heavily on US technology firms, which still operate a large share of data centres and AI systems worldwide.

Policy makers are therefore exploring different approaches to sovereignty across the AI ecosystem rather than pursuing total independence. Strategies range from building domestic computing capacity to adapting global AI models for national languages, regulations and public services.

Several countries already illustrate different approaches. The EU is investing billions in AI infrastructure, Canada protects sensitive computing resources while using global models, and India prioritises applications that serve its multilingual population through public digital systems.


Online privacy faces new pressures in the age of social media

Online privacy is eroding as digital services collect ever-growing personal data and surveillance becomes part of daily technology use. The debate has intensified as social media platforms, advertisers, and connected devices expand their ability to track behaviour, preferences, and habits.

Analysts say younger generations have adapted to this reality rather than resisting it. ‘In 2026, online privacy is a luxury, not a right,’ says Thomas Bunting, an analyst at the UK innovation think tank Nesta. He argues many people have grown up accepting data collection as a trade-off for access to online services, noting: ‘We’ve been taught how to deal with it.’

Advocates warn that the erosion of online privacy could have wider social consequences. Cybersecurity expert Prof Alan Woodward from the University of Surrey says the issue goes beyond personal privacy. ‘People should care about online privacy because it shapes who has power over their lives,’ he says, arguing that privacy is ‘about having something to protect: freedom of thought, experimentation, dissent and personal development without permanent surveillance.’

Despite a growing number of privacy tools and regulations, data exposure remains widespread. According to Statista, more than 1.35 billion people were affected by data breaches, hacks, or exposure in 2024 alone. At the same time, more than 160 countries now have privacy legislation, while users regularly encounter cookie consent prompts that govern how their data is collected online.

Experts say frustration with privacy controls reflects a broader ‘privacy paradox’, in which people express concern about data protection but rarely change their behaviour. Cisco’s Consumer Privacy Survey found that while 89% of respondents said they care about privacy, only 38% actively take steps to protect their data.

As philosopher Carissa Véliz notes, the challenge is not simply awareness but a sense of agency: ‘Mostly, people don’t feel like they have control.’ She argues that protecting privacy requires stronger regulation, responsible technology design, and cultural change, adding: ‘It’s about having [access to] the right tech, but also using it.’
