UNESCO and HBKU advance research on digital behaviour

Hamad Bin Khalifa University has unveiled the UNESCO Chair on Digital Technologies and Human Behaviour to strengthen global understanding of how emerging tools shape society.

The initiative, based in the College of Science and Engineering in Qatar, will examine the relationship between digital adoption and human behaviour, focusing on digital well-being, ethical design and healthier online environments.

The Chair is set to address issues such as internet addiction, cyberbullying and misinformation through research and policy-oriented work.

By promoting dialogue among international organisations, governments and academic institutions, the programme aims to support responsible development of digital technologies rather than approaches that overlook societal impact.

HBKU’s long-standing emphasis on ethical innovation formed the foundation for the new initiative. The launch event brought together experts from several disciplines to discuss behavioural change driven by AI, mobile computing and social media.

An expert panel considered how generative AI can improve daily life while also increasing dependency, and encouraged users to build a more intentional and balanced relationship with AI systems.

UNESCO underlined the importance of linking scientific research with practical policymaking to guide institutions and communities.

The Chair is expected to strengthen cooperation across sectors and support progress on global development goals by ensuring digital transformation remains aligned with human dignity, social cohesion and inclusive growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CERT Polska reports coordinated cyber sabotage targeting Poland’s energy infrastructure

Poland has disclosed a coordinated cyber sabotage campaign targeting more than 30 renewable energy sites in late December 2025. The incidents occurred during severe winter weather and were intended to cause operational disruption, according to CERT Polska.

Electricity generation and heat supply in Poland continued, but attackers disabled communications and remote control systems across multiple facilities. Both IT networks and industrial operational technology were targeted, marking a rare shift toward destructive cyber activity against energy infrastructure.

Investigators found attackers accessed renewable substations through exposed FortiGate devices, often without multi-factor authentication. After breaching networks, they mapped systems, damaged firmware, wiped controllers, and disabled protection relays.

Two previously unknown wiper tools, DynoWiper and LazyWiper, were used to corrupt and delete data without ransom demands. The malware spread through compromised Active Directory systems using malicious Group Policy tasks to trigger simultaneous destruction.

CERT Polska linked the infrastructure to the Russia-connected threat cluster Static Tundra, though some firms suggest Sandworm involvement. The campaign marks the first publicly confirmed destructive operation attributed to this actor, highlighting rising cyber-sabotage risks to critical energy systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI-driven scams dominate malicious email campaigns

The Catalan Cybersecurity Agency has warned that generative AI is now being used in the vast majority of email scams containing malicious links. Its Cybersecurity Outlook Report for 2026 found that more than 80% of such messages rely on AI-generated content.

The report shows that 82.6% of emails carrying malicious links include text, video, or voice produced using AI tools, making fraudulent messages increasingly difficult to identify. Scammers use AI to create near-flawless messages that closely mimic legitimate communications.

Agency director Laura Caballero said the sophistication of AI-generated scams means users face greater risks, while businesses and platforms are turning to AI-based defences to counter the threat.

She urged a ‘technology against technology’ approach, combined with stronger public awareness and basic security practices such as two-factor authentication.

Cyber incidents are also rising. The agency handled 3,372 cases in 2024, a 26% increase year on year, mostly involving credential leaks and unauthorised email access.

In response, the Catalan government has launched a new cybersecurity strategy backed by an €18.6 million investment to protect critical public services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Moltbook AI vulnerability exposes user data and API keys

A critical security flaw has emerged in Moltbook, a new AI agent social network launched by Octane AI.

The vulnerability allowed unauthenticated access to user profiles, exposing email addresses, login tokens, and API keys for registered agents. The platform’s rapid growth, with claims of 1.5 million users, was largely artificial: a single agent reportedly created hundreds of thousands of fake accounts.

Moltbook enables AI agents to post, comment, and form sub-communities, fostering interactions that range from AI debates to token-related activities.

Analysts warned that prompt injections and unregulated agent interactions could lead to credential theft or destructive actions, including data exfiltration or account hijacking. Experts described the platform as both a milestone in scale and a serious security concern.

Developers have not confirmed any patches, leaving users and enterprises exposed. Security specialists advised revoking API keys, sandboxing AI agents, and auditing potential exposures.

The lack of safeguards on the platform highlights the risks of unchecked AI agent networks, particularly for organisations that may rely on them without proper oversight.

The incident underscores the growing need for stronger governance in AI-powered social networks. Experts stress that without enforced security protocols, such platforms could be exploited at scale, affecting both individual users and corporate systems.

The Moltbook case serves as a warning about prioritising hype over security in emerging AI applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Roblox faces new Dutch scrutiny under EU digital rules

Regulators in the Netherlands have opened a formal investigation into Roblox over concerns about inadequate protections for children using the popular gaming platform.

The national authority responsible for enforcing digital rules is examining whether the company has implemented the safeguards required under the Digital Services Act rather than relying solely on voluntary measures.

Officials say children may have been exposed to harmful environments, including violent or sexualised material, as well as manipulative interfaces that encourage prolonged play.

The concerns intensify pressure on the EU authorities to monitor social platforms that attract younger users, even when they do not meet the threshold for very large online platforms.

Roblox says it has worked with Dutch regulators for months and recently introduced age checks for users who want to use chat. The company argues that it has invested in systems designed to reinforce privacy, security and safety features for minors.

The Dutch authority plans to conclude the investigation within a year. The outcome could include fines or broader compliance requirements and is likely to influence upcoming European rules on gaming and consumer protection, due later in the decade.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GDPR violation reports surge across Europe in 2025, study finds

European data protection authorities recorded a sharp rise in GDPR violation reports in 2025, according to a new study by law firm DLA Piper, signalling growing regulatory pressure across the European Union.

Average daily reports surpassed 400 for the first time since the regulation entered force in 2018, reaching 443 incidents per day, a 22% increase compared with the previous year. The firm noted that expanding digital systems, new breach reporting laws, and geopolitical cyber risks may be driving the surge.

Despite the higher number of cases in the EU, total fines remained broadly stable at around €1.2 billion for the year, pushing cumulative GDPR penalties since 2018 to €7.1 billion, underlining regulators’ continued willingness to impose major sanctions.

Ireland once again led enforcement figures, with fines imposed by its Data Protection Commission totalling €4.04 billion, reflecting the presence of major technology firms headquartered there, including Meta, Google, and Apple.

Recent headline penalties included a €1.2 billion fine against Meta and a €530 million sanction against TikTok over data transfers to China, while courts across Europe increasingly consider compensation claims linked to GDPR violations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU confronts Grok abuse as Brussels tests its digital power

The European Commission has opened a formal investigation into Grok after the tool produced millions of sexualised images of women and children.

The scrutiny centres on whether X failed to carry out adequate risk assessments before releasing the undressing feature in the European market. The case arrives as ministers, including Sweden’s deputy prime minister, publicly reveal being targeted by the technology.

Brussels is preparing to use its strongest digital laws instead of deferring to US pressure. The Digital Services Act allows the European Commission to fine major platforms or force compliance measures when systemic harms emerge.

Experts argue the Grok investigation represents an important test of European resolve, particularly as the bloc tries to show it can hold powerful companies to account.

Concerns remain about the willingness of the EU to act decisively. Reports suggest the opening of the probe was delayed because of a tariff dispute with Washington, raising questions about whether geopolitical considerations slowed the enforcement response.

Several lawmakers say the delay undermined confidence in the bloc’s commitment to protecting fundamental rights.

The investigation could last months and may have wider implications for content ranking systems already under scrutiny.

Critics say financial penalties may not be enough to change behaviour at X, yet the case is still viewed as a pivotal moment for European digital governance. Observers believe a firm outcome would demonstrate that emerging harms linked to synthetic media cannot be ignored.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Millions use Telegram to create AI deepfake nudes as digital abuse escalates

A global wave of deepfake abuse is spreading across Telegram as millions of users generate and share sexualised images of women without consent.

Researchers have identified at least 150 active channels offering AI-generated nudes of celebrities, influencers and ordinary women, often for payment. The widespread availability of advanced AI tools has turned intimate digital abuse into an industrialised activity.

Telegram states that deepfake pornography is banned and says moderators removed nearly one million violating posts in 2025. Yet new channels appear immediately after old ones are shut, enabling users to exchange tips on how to bypass safety controls.

The rise of nudification apps on major app stores, downloaded more than 700 million times, adds further momentum to an expanding ecosystem that encourages harassment rather than accountability.

Experts argue that the celebration of such content reflects entrenched misogyny instead of simple technological misuse. Women targeted by deepfakes face isolation, blackmail, family rejection and lost employment opportunities.

Legal protections remain minimal in much of the world, with fewer than 40% of countries having laws that address cyber-harassment or stalking.

Campaigners warn that women in low-income regions face the most significant risks due to poor digital literacy, limited resources and inadequate regulatory frameworks.

The damage inflicted on victims is often permanent, as deepfake images circulate indefinitely across platforms and are effectively impossible to remove, comprehensively undermining safety, dignity and long-term opportunities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fake AI assistant steals OpenAI credentials from thousands of Chrome users

A Chrome browser extension posing as an AI assistant has stolen OpenAI credentials from more than 10,000 users. Cybersecurity platform Obsidian identified the malicious software, known as H-Chat Assistant, which secretly harvested API keys and transmitted user data to hacker-controlled servers.

The extension, initially called ChatGPT Extension, appeared to function normally after users provided their OpenAI API keys. Analysts discovered that the theft occurred when users deleted chats or logged out, which triggered transmission of the stored keys via hardcoded Telegram bot credentials.

At least 459 unique API keys were exfiltrated to a Telegram channel months before they were discovered in January 2025.

Researchers believe the malicious activity began in July 2024 and continued undetected for months. Following disclosure to OpenAI on 13 January, the company revoked compromised API keys, though the extension reportedly remained available in the Chrome Web Store.

Security analysts identified 16 related extensions sharing identical developer fingerprints, suggesting a coordinated campaign by a single threat actor.

LayerX Security consultant Natalie Zargarov warned that whilst current download numbers remain relatively low, AI-focused browser extensions could rapidly surge in popularity.

The malicious extensions exploit vulnerabilities in web-based authentication processes, creating, as researchers describe, a ‘materially expanded browser attack surface’ through deep integration with authenticated web applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SoundCloud breach exposes nearly 30 million users

SoundCloud disclosed a major data breach in December 2025, confirming that around 29.8 million global user accounts were affected. The incident represents one of the largest security failures involving a global music streaming platform.

The privacy breach exposed email addresses alongside public profile information, including usernames, display names and follower data. SoundCloud said passwords and payment details were not accessed, but the combined data increases the risk of phishing.

SoundCloud detected unauthorised activity in December 2025 and launched an internal investigation. Attackers reportedly exploited a flaw that linked public profile data with private email addresses at scale.

After SoundCloud refused an extortion demand, the stolen dataset was released publicly. SoundCloud has urged users worldwide to monitor accounts closely and enable stronger security protections.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!