EU–US data privacy certification strengthens StackAdapt compliance

StackAdapt has secured EU–US Data Privacy Framework certification, strengthening GDPR compliance and enabling cross-border data transfers between the EU and the US.

The certification allows the advertising technology firm to transfer personal data from the EU to the US without relying on additional transfer mechanisms, such as standard contractual clauses.

The framework, adopted in 2023, provides a legal basis for EU-to-US data flows while strengthening oversight and accountability. Certification requires organisations to meet strict standards on data minimisation, security, transparency, and individual rights.

By joining the framework, StackAdapt enhances its ability to support advertisers, publishers, and partners through seamless international data processing.

The move also reduces regulatory complexity for European customers while reinforcing the company’s broader commitment to privacy-by-design and responsible data use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Council presidency launches talks on AI deepfakes and cyberattacks

EU member states are preparing to open formal discussions on the risks posed by AI-powered deepfakes and their use in cyberattacks, following an initiative by the current Council presidency.

The talks are intended to assess how synthetic media may undermine democratic processes and public trust across the bloc.

According to sources, capitals will also begin coordinated exchanges on the proposed Democracy Shield, a framework aimed at strengthening resilience against foreign interference and digitally enabled manipulation.

Deepfakes are increasingly viewed as a cross-cutting threat, combining disinformation, cyber operations and influence campaigns.

The timeline set out by the presidency foresees structured discussions among national experts before escalating the issue to the ministerial level. The approach reflects growing concern that existing cyber and media rules are insufficient to address rapidly advancing AI-generated content.

The initiative signals a broader shift within the Council towards treating deepfakes not only as a content moderation challenge, but as a security risk with implications for elections, governance and institutional stability.


Google faces new UK rules over AI summaries and publisher rights

The UK competition watchdog has proposed new rules that would force Google to give publishers greater control over how their content is used in search and AI tools.

The Competition and Markets Authority (CMA) plans to require opt-outs for AI-generated summaries and model training, marking the first major intervention under Britain’s new digital markets regime.

Publishers argue that generative AI threatens traffic and revenue by answering queries directly instead of sending users to the original sources.

The CMA proposal would also require clearer attribution of publisher content in AI results and stronger transparency around search rankings, including AI Overviews and conversational search features.

Additional measures under consultation include search engine choice screens on Android and Chrome, alongside stricter data portability obligations. The regulator says tailored obligations would give businesses and users more choice while supporting innovation in digital markets.

Google has warned that overly rigid controls could damage the user experience, describing the relationship between AI and search as complex.

The consultation runs until late February, with the outcome expected to shape how AI-powered search operates in the UK.


AI project seeks major leap in diabetes care

A major research initiative led by the University of Virginia has secured $4.7 million to advance machine learning in Type 1 diabetes care.

The project, backed by Breakthrough T1D and the Helmsley Charitable Trust, aims to develop fully automated insulin systems that adapt continuously to patient needs.

The research will combine adaptive algorithms with ultra-rapid insulin to enable personalised glucose control without manual input. The University of Virginia will lead engineering and algorithm development, with clinical trials conducted across multiple US research centres.

At its core is an AI framework that learns from real-time data, adapting to metabolic changes, stress, and daily rhythms. Researchers aim to overcome the limitations of current automated insulin systems, which still rely on fixed parameters and regular user intervention.

The collaboration reflects a shift towards patient-centred AI, aiming to reduce daily diabetes management burdens while improving safety and quality of life. Developers say the technology could offer families greater freedom and long-term stability in managing chronic conditions.


Canada’s Cyber Centre flags rising ransomware risks for 2025 to 2027

The national cyber authority of Canada has warned that ransomware will remain one of the country’s most serious cyber threats through 2027, as attacks become faster, cheaper and harder to detect.

The Canadian Centre for Cyber Security, part of Communications Security Establishment Canada, says ransomware now operates as a highly interconnected criminal ecosystem driven by financial motives and opportunistic targeting.

According to the outlook, threat actors are increasingly using AI and cryptocurrency while expanding extortion techniques beyond simple data encryption.

Businesses, public institutions and critical infrastructure in Canada remain at risk, with attackers continuously adapting their tactics, techniques and procedures to maximise financial returns.

The Cyber Centre stresses that basic cyber hygiene still provides strong protection. Regular software updates, multi-factor authentication and vigilance against phishing attempts significantly reduce exposure, even as attack methods evolve.

The report also highlights the importance of cooperation between government bodies, law enforcement, private organisations and the public.

Officials conclude that while ransomware threats will intensify over the next two years, early warnings, shared intelligence and preventive measures can limit damage.

Canada’s cyber authorities say continued investment in partnerships and guidance remains central to building national digital resilience.


TikTok struggles to stabilise US infrastructure after data centre outage

TikTok says recovery of its US infrastructure is progressing, although technical issues continue to affect parts of the platform after a data centre power outage.

The disruption followed the launch of a new US-based entity backed by American investors, a move aimed at avoiding a nationwide ban.

Users across the country reported problems with searches, video playback, posting content, loading comments and unexpected behaviour in the For You algorithm. TikTok said the outage also affected other apps and warned that slower load times and timeouts may persist before performance returns to normal.

In a statement posted by the TikTok USDS Joint Venture, the company said collaboration with its US data centre partner has restored much of the infrastructure, but posting new content may still trigger errors.

Creators may also see missing views, likes, or earnings due to server timeouts rather than actual data loss.

TikTok has not named the data centre partner involved, although severe winter storms across the US may have contributed to the outage. Despite growing scepticism around the timing of the disruption, the company insists that user data and engagement remain secure.


OpenAI prepares ad rollout inside free ChatGPT service

Advertising is set to be introduced within the free ChatGPT service, signalling a shift in how the platform will be monetised as its user base continues to expand rapidly. The move reflects OpenAI’s plans to turn widespread adoption into a sustainable revenue stream.

The company confirmed that ad testing will begin in the coming weeks, with sponsored content shown at the bottom of relevant ChatGPT responses. OpenAI said advertisements will be clearly labelled and separated from organic answers.

ChatGPT now serves more than 800 million users globally, most of whom currently access the service at no cost. Despite OpenAI's high valuation, the company has continued to operate at a loss while expanding its infrastructure and AI capabilities.

Advertising represents OpenAI’s latest effort to diversify income beyond paid subscriptions and enterprise services. Sponsored recommendations will be shown only when products or services are deemed relevant to the user’s ongoing conversation.

The shift places OpenAI closer to traditional digital platform business models, raising broader questions about how commercial incentives may shape conversational AI systems as they become central gateways to online information.


India considers social media bans for children under 16

India is emerging as a potential test case for age-based social media restrictions as several states examine Australia-style bans on children’s access to platforms.

Goa and Andhra Pradesh are studying whether to prohibit social media use for those under 16, citing growing concerns over online safety and youth well-being. The debate has also reached the judiciary, with the Madras High Court urging the federal government to consider similar measures.

The proposals carry major implications for global technology companies, given that India’s internet population exceeds one billion users and continues to skew young.

Platforms such as Meta, Google and X rely heavily on India for long-term growth, advertising revenue and user expansion. Industry voices argue parental oversight is more effective than government bans, warning that restrictions could push minors towards unregulated digital spaces.

Australia’s under-16 ban, which entered into force in late 2025, has already exposed enforcement difficulties, particularly around age verification and privacy risks. Determining users’ ages accurately remains challenging, while digital identity systems raise concerns about data security and surveillance.

Legal experts note that internet governance falls under India’s federal authority, limiting what individual states can enforce without central approval.

Although the data protection law of India includes safeguards for children, full implementation will extend through 2027, leaving policymakers to balance child protection, platform accountability and unintended consequences.


Austrian watchdog rules against Microsoft education tracking

Microsoft has been found to have unlawfully placed tracking cookies on a child’s device without valid consent, following a ruling by Austria’s data protection authority.

The case stems from a complaint filed by a privacy group, noyb, concerning Microsoft 365 Education, a platform used by millions of pupils and teachers across Europe.

According to the decision, Microsoft deployed cookies that analysed user behaviour, collected browser data and served advertising purposes, despite being used in an educational context involving minors. The Austrian authority ordered the company to cease the unlawful tracking within four weeks.

Noyb warned the ruling could have broader implications for organisations relying on Microsoft software, particularly schools and public bodies. A data protection lawyer at the group criticised Microsoft’s approach to privacy, arguing that protections appear secondary to marketing considerations.

The ruling follows earlier GDPR findings against Microsoft, including violations of access rights and concerns raised over the European Commission’s own use of Microsoft 365.

Although previous enforcement actions were closed after contractual changes, regulatory scrutiny of Microsoft’s education and public sector products continues.
