Google’s health-related search results increasingly draw on YouTube rather than hospitals, government agencies, or academic institutions, according to new research into how AI Overviews select the sources they cite.
An analysis by SEO platform SE Ranking reviewed more than 50,000 German-language health queries and found AI Overviews appeared on over 82% of searches, making healthcare one of the most AI-influenced information categories on Google.
Across all cited sources, YouTube ranked first by a wide margin, accounting for more than 20,000 references and surpassing medical publishers, hospital websites, and public health authorities.
Academic journals and research institutions accounted for less than 1% of citations, while national and international government health bodies accounted for under 0.5%, highlighting a sharp imbalance in source authority.
Researchers warn that when platform-scale content outweighs evidence-based medical sources, the risk extends beyond misinformation to long-term erosion of trust in AI-powered search systems.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Gmail experienced widespread email filtering issues on Saturday, sending spam into primary inboxes and mislabelling legitimate messages as suspicious, according to Google’s Workspace status dashboard.
Problems began around 5 a.m. Pacific time, with users reporting disrupted inbox categories, unexpected spam warnings and delays in email delivery. Many said promotional and social emails appeared in primary folders, while trusted senders were flagged as potential threats.
Google acknowledged the malfunction throughout the day, noting ongoing efforts to restore normal service as complaints spread across social media platforms.
By Saturday evening, the company confirmed the issue had been fully resolved for all users, although some misclassified messages and spam warnings may remain visible for emails received before the fix.
Google said it is conducting an internal investigation and will publish a detailed incident analysis to explain what caused the disruption.
Meta Platforms has announced a temporary pause on teenagers’ access to AI characters across its platforms, including Instagram and WhatsApp, saying it will use the pause to review and rebuild the feature for younger users.
Meta said the restriction will apply to users identified as minors based on declared ages or internal age-prediction systems. Teenagers will still be able to use Meta’s core AI assistant, though interactive AI characters will be unavailable.

The move comes ahead of a major child safety trial in Los Angeles involving Meta, TikTok and YouTube, in which plaintiffs allege that social media platforms harm children through addictive and unsafe digital features.

Concerns about AI chatbots and minors have grown across the US, prompting similar action by other companies, as regulators and courts increasingly scrutinise how AI interactions affect young users.
Australia’s social media ban for under-16s is worrying social media companies, which, according to the country’s eSafety Commissioner, fear it could trigger a global trend of banning such apps. Regulators say major platforms resisted the policy, fearing that similar rules could spread internationally.
The ban has already led to the closure of 4.7 million child-linked accounts across platforms, including Instagram, TikTok and Snapchat. Authorities argue the measures are necessary to protect children from harmful algorithms and addictive design.
Social media companies operating in Australia, including Meta, say stronger safeguards are needed but oppose a blanket ban. Critics have warned about privacy risks, while regulators insist early data shows limited migration to alternative platforms.
Australia is now working with partners such as the UK to push for tougher global standards on online child safety, and companies that fail to enforce the rules effectively face fines of up to A$49.5m.
Nike has launched an internal investigation following claims by the WorldLeaks cybercrime group that company data was stolen from its systems.
The sportswear giant said it is assessing a potential cybersecurity incident after the group listed Nike on its Tor leak site and published a large volume of files allegedly taken during the intrusion.
WorldLeaks claims to have released approximately 1.4 terabytes of data, comprising more than 188,000 files. The group is known for data theft and extortion tactics, pressuring organisations to pay by threatening public disclosure instead of encrypting systems with ransomware.
The cybercrime operation emerged in 2025 after rebranding from Hunters International, a ransomware gang active since 2023. Increased law enforcement pressure reportedly led the group to abandon encryption-based attacks and focus exclusively on stealing sensitive corporate data.
The incident adds to growing concerns across the retail and apparel sector, following a recent breach at Under Armour that exposed tens of millions of customer records.
Nike has stated that consumer privacy and data protection remain priorities while the investigation continues.
AI has dominated discussions at the World Economic Forum in Davos, where IMF managing director Kristalina Georgieva warned that labour markets are already undergoing rapid structural disruption.
According to Georgieva, demand for skills is shifting unevenly, with productivity gains benefiting some workers while younger people and first-time job seekers face shrinking opportunities.
Entry-level roles are particularly exposed as AI systems absorb routine and clerical tasks traditionally used to gain workplace experience.
Georgieva described the effect on young workers as comparable to a labour-market tsunami, arguing that reduced access to foundational roles risks long-term scarring for an entire generation entering employment.
IMF research suggests AI could affect roughly 60% of jobs in advanced economies and 40% globally, with only about half of exposed workers likely to benefit.
For others, automation may lead to lower wages, slower hiring and intensified pressure on middle-income roles lacking AI-driven productivity gains.
At Davos 2026, Georgieva warned that the rapid, unregulated deployment of AI in advanced economies risks outpacing public policy responses.
Without clear guardrails and inclusive labour strategies, she argued, technological acceleration could deepen inequality rather than support broad-based economic resilience.
Apple has accused the European Commission of preventing it from implementing App Store changes designed to comply with the Digital Markets Act, following a €500 million fine for breaching the regulation.
The company claims it submitted a formal compliance plan in October and has yet to receive a response from EU officials.
In a statement, Apple argued that the Commission requested delays while gathering market feedback, a process the company says lasted several months and lacked a clear legal basis.
The US tech giant described the enforcement approach as politically motivated and excessively burdensome, accusing the EU of unfairly targeting an American firm.
The Commission has rejected those claims, saying discussions with Apple remain ongoing and emphasising that any compliance measures must support genuinely viable alternative app stores.
Officials pointed to the emergence of multiple competing marketplaces after the DMA entered into force as evidence of market demand.
Scrutiny has increased following the decision by Setapp Mobile to shut down its iOS app store in February, with the developer citing complex and evolving business terms.
Questions remain over whether Apple’s proposed shift towards commission-based fees and expanded developer communication rights will satisfy EU regulators.
Researchers and free-speech advocates are warning that coordinated swarms of AI agents could soon be deployed to manipulate public opinion at a scale capable of undermining democratic systems.
According to a consortium of academics from leading universities, advances in generative and agentic AI now enable large numbers of human-like bots to infiltrate online communities and autonomously simulate organic political discourse.
Unlike earlier forms of automated misinformation, AI swarms are designed to adapt to social dynamics, learn community norms and exchange information in pursuit of a shared objective.
By mimicking human behaviour and spreading tailored narratives gradually, such systems could fabricate consensus, amplify doubt around electoral processes and normalise anti-democratic outcomes without triggering immediate detection.
Evidence of early influence operations has already emerged in recent elections across Asia, where AI-driven accounts have engaged users with large volumes of unverifiable information rather than overt propaganda.
Researchers warn that information overload, strategic neutrality and algorithmic amplification may prove more effective than traditional disinformation campaigns.
The authors argue that democratic resilience now depends on global coordination, combining technical safeguards such as watermarking and detection tools with stronger governance of political AI use.
Without collective action, they caution that AI-enabled manipulation risks outpacing existing regulatory and institutional defences.
South Korea has moved towards regulatory action against Grok, the generative AI chatbot developed by xAI, following allegations that the system was used to generate and distribute sexually exploitative deepfake images.
The country’s Personal Information Protection Commission has launched a preliminary fact-finding review to assess whether violations occurred and whether the matter falls within its legal remit.
The review follows international reports accusing Grok of facilitating the creation of explicit and non-consensual images of real individuals, including minors.
Under the Personal Information Protection Act of South Korea, generating or altering sexual images of identifiable people without consent may constitute unlawful handling of personal data, exposing providers to enforcement action.
Concerns have intensified after civil society groups estimated that millions of explicit images were produced through Grok over a short period, with thousands involving children.
Several governments, including those in the US, Europe and Canada, have opened inquiries, while parts of Southeast Asia have opted to block access to the service altogether.
In response, xAI has introduced technical restrictions preventing users from generating or editing images of real people. Korean regulators have also demanded stronger youth protection measures from X, warning that failure to address criminal content involving minors could result in administrative penalties.
French President Emmanuel Macron has called for an accelerated legislative process to introduce a nationwide ban on social media for children under 15 by September.
Speaking in a televised address, Macron said the proposal would move rapidly through parliament so that explicit rules are in place before the new school year begins.
Macron framed the initiative as a matter of child protection and digital sovereignty, arguing that foreign platforms or algorithmic incentives should not shape young people’s cognitive and emotional development.
He linked excessive social media use to manipulation, commercial exploitation and growing psychological harm among teenagers.
Data from France’s health watchdog show that almost half of teenagers spend between two and five hours a day on their smartphones, with the vast majority accessing social networks daily.
Regulators have associated such patterns with reduced self-esteem and increased exposure to content linked to self-harm, drug use and suicide, prompting legal action by families against major platforms.
The proposal from France follows similar debates in the UK and Australia, where age-based access restrictions have already been introduced.
The French government argues that decisive national action is necessary instead of waiting for a slower Europe-wide consensus, although Macron has reiterated support for a broader EU approach.