ChatGPT reaches 40 million daily users for health advice

More than 40 million people worldwide now use ChatGPT daily for health-related advice, according to OpenAI.

Over 5 percent of all messages sent to the chatbot relate to healthcare, with three in five US adults reporting such use in the past three months. Many interactions occur outside clinic hours, highlighting the demand for AI guidance in navigating complex medical systems.

Users primarily turn to AI to check symptoms, understand medical terms, and explore treatment options.

OpenAI emphasises that ChatGPT helps patients gain agency over their health, particularly in rural areas where hospitals and specialised services are scarce.

The technology also supports healthcare professionals by reducing administrative burdens and providing timely information.

Despite growing adoption, regulatory oversight remains limited. Some US states have attempted to regulate AI in healthcare, and lawsuits have emerged over cases where AI-generated advice has caused harm.

OpenAI argues that ChatGPT supplements rather than replaces medical services, helping patients interpret information, prepare for care, and navigate gaps in access.

Healthcare workers are also increasingly using AI. Surveys show that two in five US professionals, including nurses and pharmacists, use generative AI weekly to draft notes, summarise research, and streamline workflows.

OpenAI plans to release healthcare policy recommendations to guide the responsible adoption of AI in clinical settings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI apprenticeships and short courses advance online learning

Diplo Academy strengthened its online education offer in 2025, placing AI and practical learning at the centre of training for diplomats, policy makers and public officials facing growing digital pressures.

A notable change was the transition from traditional eight-week online courses to more focused four-week formats. The ongoing shift aims to better match the schedules of working professionals without compromising academic rigour, applied learning or peer exchange.

Artificial intelligence featured prominently across the curriculum, particularly through the AI Apprenticeship. Three editions of the course were delivered in 2025, including one designed for Geneva-based international organisations.

The AI Apprenticeship emphasises informed, knowledgeable, effective and responsible use of AI in everyday professional contexts. Inspired by the Swiss apprenticeship tradition, it combines hands-on practice, mentorship and critical thinking, enabling participants to apply AI tools while retaining human judgement and accountability.

Diplo Academy 2025: Online courses in numbers

Musk says users are liable for illegal Grok content

Scrutiny has intensified around X after its Grok chatbot was found generating non-consensual explicit images when prompted by users.

Grok had been positioned as a creative AI assistant, yet regulators reacted swiftly once altered photos were linked to content involving minors. Governments and rights groups renewed pressure on platforms to prevent abusive use of generative AI.

India’s Ministry of Electronics and IT issued a notice to X demanding an Action Taken Report within 72 hours, citing failure to restrict unlawful content.

Authorities in France referred similar cases to prosecutors and urged enforcement under the EU’s Digital Services Act, signalling growing international resolve to control AI misuse.

Elon Musk responded by stating that users, not Grok, would be legally responsible for illegal material generated through prompts. The company said offenders would face permanent bans and that it would cooperate with law enforcement.

Critics argue that transferring liability to users does not remove the platform’s duty to embed stronger safeguards.

Independent reports suggest Grok has previously been involved in deepfake creation, fuelling a wider debate about accountability in the AI sector. The outcome could shape expectations worldwide regarding how platforms design and police powerful AI tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reddit overtakes TikTok in the UK social media race

Reddit has quietly overtaken TikTok to become Britain’s fourth most-visited social media platform, marking a major shift in how people search for information and share opinions online.

Use of the platform among UK internet users has risen sharply over the past two years, driven strongly by younger audiences who are increasingly drawn to open discussion instead of polished influencer content.

Google’s algorithm changes have helped accelerate Reddit’s rise by prioritising forum-based conversations in search results. Partnership deals with major AI companies have reinforced visibility further, as AI tools increasingly cite Reddit threads.

Younger users in the UK appear to value unfiltered and experience-based conversations, creating strong growth across lifestyle, beauty, parenting and relationship communities, alongside major expansion in football-related discussion.

Women now make up more than half of Reddit’s UK audience, signalling a major demographic shift for a platform once associated mainly with male users. Government departments and ministers are also using Reddit for direct engagement through public Q&A sessions.

Tension remains part of the platform’s culture, yet company leaders argue that community moderation and voting systems help manage behaviour.

Reddit is now encouraging users to visit directly instead of arriving via search or AI summaries, positioning the platform as a human alternative to automated answers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Concerns raised over Google AI Overviews and health advice

A Guardian investigation has found that Google’s AI Overviews have displayed false and misleading health information that could put people at risk of harm. The summaries, which appear at the top of search results, are generated using AI and are presented as reliable snapshots of key information.

The investigation identified multiple cases where Google’s AI summaries provided inaccurate medical advice. Examples included incorrect guidance for pancreatic cancer patients, misleading explanations of liver blood test results, and false information about women’s cancer screening.

Health experts warned that such errors could lead people to dismiss symptoms, delay treatment, or follow harmful advice. Some charities said the summaries lacked essential context and could mislead users during moments of anxiety or crisis.

Concerns were also raised about inconsistencies, with the same health queries producing different AI-generated answers at different times. Experts said this variability undermines trust and increases the risk that misinformation will influence health decisions.

Google said most AI Overviews are accurate and helpful, and that the company continually improves quality, particularly for health-related topics. It said action is taken when summaries misinterpret content or lack appropriate context.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Universities in Ireland urged to rethink assessments amid AI concerns

Face-to-face interviews and oral verification could become a routine part of third-level assessments under new recommendations aimed at addressing the improper use of AI. Institutions are being encouraged to redesign assessment methods to ensure student work is authentic.

The proposals are set out in new guidelines published by the Higher Education Authority (HEA) of Ireland, which regulates universities and other third-level institutions. The report argues that assessment systems must evolve to reflect the growing use of generative AI in education.

While encouraging institutions to embrace AI’s potential, the report stresses the need to ensure students are demonstrating genuine learning. Academics have raised concerns that AI-generated assignments are increasingly difficult to distinguish from original student work.

To address this, the report recommends redesigning assessments to prioritise student authorship and human judgement. Suggested measures include oral verification, process-based learning, and, where appropriate, a renewed reliance on written exams conducted without technology.

The authors also caution against relying on AI detection tools, arguing that integrity processes should be based on dialogue and evidence. They call for clearer policies, staff and student training, and safeguards around data use and equitable access to AI tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New offline AI note app promises privacy without subscriptions

Growing concern over data privacy and subscription fatigue has led an independent developer to create WitNote, an AI note-taking tool that runs entirely offline.

The software allows users to process notes locally on Windows and macOS rather than relying on cloud-based services where personal information may be exposed.

WitNote supports lightweight language models such as Qwen2.5-0.5B that can run with limited storage requirements. Users may also connect to external models through API keys if preferred.

Core functions include rewriting, summarising and extending content, while a WYSIWYG Markdown editor provides a familiar workflow without the network delays of web-based interfaces.

Another key feature is direct integration with Obsidian Markdown files, allowing notes to be imported instantly and managed in one place.

The developer says the project remains a work in progress but commits to ongoing updates and user-driven improvements, even joining Apple’s developer programme personally to support smoother installation.

For users seeking AI assistance while protecting privacy and avoiding monthly fees, WitNote positions itself as an appealing offline alternative that keeps full control of data on the local machine.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Belgium’s influencers seek clarity through a new certification scheme

The booming influencer economy of Belgium is colliding with an advertising rulebook that many creators say belongs to another era.

Different obligations across federal, regional and local authorities mean that wording acceptable in one region may trigger a reprimand in another. Some influencers have even faced large fines for administrative breaches such as failing to publish business details on their profiles.

In response, the Influencer Marketing Alliance in Belgium has launched a certification scheme designed to help creators navigate the legal maze instead of risking unintentional violations.

Influencers complete an online course on advertising and consumer law and must pass a final exam before being listed in a public registry monitored by the Jury for Ethical Practices.

Major brands, including L’Oréal and Coca-Cola, already prefer to collaborate with certified creators to ensure compliance and credibility.

Not everyone is convinced.

Some Belgian influencers argue that certification adds more bureaucracy at a time when they already struggle to understand overlapping rules. Others see value in it as a structured reminder that content creators remain legally responsible for commercial communication shared with followers.

The alliance is also pushing lawmakers to involve influencers more closely when drafting future rules, including taxation and safeguards for child creators.

Consumer groups such as BEUC support clearer definitions and obligations under the forthcoming EU Digital Fairness Act, arguing that influencer advertising should follow the same standards as other media instead of remaining in a grey zone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Best AI dictation tools for faster speech-to-text in 2026

AI dictation has matured in recent years, after a long stretch of patchy performance and frustrating inaccuracies.

Advances in speech-to-text engines and large language models now allow modern dictation tools to recognise everyday speech more reliably while keeping enough context to format sentences automatically instead of producing raw transcripts that require heavy editing.

Several leading apps have emerged with different strengths. Wispr Flow focuses on flexibility with style options and custom vocabulary, while Willow blends automation with privacy by storing transcripts locally.

Monologue also prioritises privacy by allowing users to download the model and run transcription entirely on their own machines. Superwhisper caters for power users by supporting multiple downloadable models and transcription from audio or video files.

Other tools take different approaches. VoiceTypr offers an offline-first design with lifetime licensing, Aqua promotes speed and phrase-based shortcuts, Handy provides a simple free open source starting point, and Typeless gives one of the most generous free allowances while promising strong data protection.

Each reflects a wider trend where developers try to balance convenience, privacy, control and affordability.

Users now benefit from cleaner, more natural-sounding transcripts instead of the rigid audio typing tools of previous years. AI dictation has become faster, more accurate and far more usable for everyday note-taking, messaging and work tasks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Best AI chatbot for maths accuracy revealed in new benchmark

AI tools are increasingly used for simple everyday calculations, yet a new benchmark suggests accuracy remains unreliable.

The ORCA study tested five major chatbots across 500 real-world maths prompts and found that users still face roughly a 40 percent chance of receiving the wrong answer.

Gemini from Google recorded the highest score at 63 percent, with xAI’s Grok almost level at 62.8 percent. DeepSeek followed with 52 percent, while ChatGPT scored 49.4 percent, and Claude placed last at 45.2 percent.

Performance varied sharply across subjects: maths and conversion tasks produced the best results, while physics questions dragged average accuracy below 40 percent.

Researchers identified most errors as sloppy calculations or rounding mistakes, rather than deeper failures to understand the problem. Finance and economics questions highlighted the widest gaps between the models, while DeepSeek struggled most in biology and chemistry, with barely one correct answer in ten.

Users are advised to double-check results whenever accuracy is crucial. A calculator or a verified source remains a safer option than relying entirely on an AI chatbot for numerical certainty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!