EU faces tension over potential ban on AI ‘pornification’

Lawmakers in the European Parliament remain divided over whether a direct ban on AI-driven ‘pornification’ should be added to the emerging digital omnibus.

Left-wing members push for an explicit prohibition, arguing that synthetic sexual imagery generated without consent has created a rapidly escalating form of online abuse. They say a strong legal measure is required instead of fragmented national responses.

Centrist and liberal groups take a different position, promoting lighter requirements for industrial AI and seeking clarity on how any restrictions would interact with the AI Act.

They warn that an unrefined ban could spill over into general-purpose models and complicate enforcement across the European market. Their priority is a more predictable regulatory environment for companies developing high-volume AI systems.

Key figures across the political spectrum, including lawmakers such as Assita Kanko, Axel Voss and Brando Benifei, continue to debate how far the omnibus should go.

Some argue that safeguarding individuals from non-consensual sexual deepfakes must outweigh concerns about administrative burdens, while others insist that proportionality and technical feasibility need stronger assessment.

The lack of consensus leaves the proposal in a delicate phase as negotiations intensify. Lawmakers now face growing public scrutiny over how Europe will respond to the misuse of generative AI.

The Parliament has yet to take a clear stance, and the path toward agreement remains far from assured.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Saudi Arabia recasts Vision 2030 with new priorities

Saudi Arabia is steering the new phase of Vision 2030 toward technology, digital infrastructure and advanced industry, moving away from reliance on large urban construction schemes.

Officials highlight the need to support sectors that can accelerate innovation, strengthen data capabilities and expand the kingdom’s role in global tech development.

The move aligns with ongoing efforts to diversify the economy and build long-term competitiveness in areas such as smart manufacturing, logistics technology and clean energy systems.

Recent adjustments involve scaling back or rescheduling some giga projects so that investment can be channelled toward initiatives with strong digital and technological potential.

Elements of the NEOM programme have been revised, while funding attention is shifting to areas that enable automation, renewable technologies and high-value services.

Saudi Arabia aims to position Riyadh as a regional hub for research, emerging technologies and advanced industries. Officials stress that Vision 2030 remains active, yet its next stage will focus on projects that can accelerate technological adoption and strengthen economic resilience.

The Public Investment Fund continues to guide investment toward ecosystems that support innovation, including clean energy, digital infrastructure and international technology partnerships.

The approach reflects earlier recommendations to match economic planning with evolving skills, future labour market needs and opportunities in fast-growing sectors.

Analysts note that the revised direction prioritises sustainable growth by expanding the kingdom’s participation in global technological development instead of relying mainly on construction-driven momentum.

Social and regulatory reforms connected to digital transformation also remain part of the Vision 2030 agenda. Investments in training, digital literacy and workforce development are intended to ensure that young people can participate fully in the technology sectors the kingdom is prioritising.

With this shift, the government seeks to balance long-term economic diversification with practical technological goals that reinforce innovation and strengthen the country's competitive position.

Women driving tech innovation as Web Summit marks 10 years

Web Summit’s Women in Tech programme marked a decade of work in Qatar by highlighting steady progress in female participation across global technology sectors.

The event recorded an increase in women-founded startups and reflected rising engagement in Qatar, where female founders now account for 38 percent of startups.

Leaders from the initiative noted how supportive networks, mentorship, and access to role models are reshaping opportunities for women in technology and entrepreneurship.

Speakers from IBM and other companies focused on the importance of AI skills in shaping the future workforce. They argued that adequate preparation depends on understanding how AI shapes everyday roles, rather than relying solely on technical tools.

IBM’s SkillsBuild platform continues to partner with universities, schools, and nonprofit groups to expand access to recognised AI credentials that can support higher earning potential and new career pathways.

Another feature of the event was its emphasis on inclusion as a driver of innovation. The African Women in Technology initiative, led by Anie Akpe, is working to offer free training in cybersecurity and AI so women in emerging markets can benefit from new digital opportunities.

These efforts aim to support business growth at every level, even for women operating in local markets, who can use technology to reach wider communities.

Female founders also used the platform to showcase new health technology solutions.

ScreenMe, a Qatari company founded by Dr Golnoush Golsharazi, presented its reproductive microbiome testing service, created in response to long-standing gaps in women’s health research and screening.

Organisers expressed confidence that women-led innovation will expand across the region, supported by rising investment and continuing visibility at major global events.

Hackers abuse legitimate admin software to hide cyber attacks

Cybercriminals are increasingly abusing legitimate administrative software to access corporate networks, making malicious activity harder to detect. Attackers are blending into normal operations by relying on trusted workforce and IT management tools rather than custom malware.

Recent campaigns have repurposed ‘Net Monitor for Employees Professional’ and ‘SimpleHelp’, tools usually used for staff oversight and remote support. Screen viewing, file management, and command features were exploited to control systems without triggering standard security alerts.

Researchers at Huntress identified the activity in early 2026, finding that the tools were used to maintain persistent, hidden access. Analysis showed that attackers were actively preparing compromised systems for follow-on attacks rather than limiting their activity to surveillance.

The access was later linked to attempts to deploy ‘Crazy’ ransomware and steal cryptocurrency, with intruders disguising the software as legitimate Microsoft services. Monitoring agents were often renamed to resemble standard cloud processes, thereby remaining active without attracting attention.

Huntress advised organisations to limit software installation rights, enforce multi-factor authentication, and audit networks for unauthorised management tools. Monitoring for antivirus tampering and suspicious program names remains critical for early detection.

Global coalition demands ban on AI-nudification tools over child-safety fears

More than 100 organisations have urged governments to outlaw AI-nudification tools after a surge in non-consensual digital images.

Groups such as Amnesty International, the European Commission, and Interpol argue that the technology now fuels harmful practices that undermine human dignity and child safety. Their concerns intensified after the Grok nudification scandal, in which users created sexualised images from ordinary photographs.

Campaigners warn that the tools often target women and children instead of staying within any claimed adult-only environment. Millions of manipulated images have circulated across social platforms, with many linked to blackmail, coercion and child sexual abuse material.

Experts say the trauma caused by these AI images is no less serious because the abuse occurs online.

Organisations within the coalition maintain that tech companies already possess the ability to detect and block such material but have failed to apply essential safeguards.

They want developers and platforms to be held accountable and believe that strict prohibitions are now necessary to prevent further exploitation. Advocates argue that meaningful action is overdue and that protection of users must take precedence over commercial interests.

South Korea confirms scale of Coupang data breach

The South Korean government has confirmed that 33.67 million user accounts were exposed in a major data breach at Coupang. The findings were released by the Ministry of Science and ICT in Seoul.

Investigators said names and email addresses were leaked, while delivery lists containing addresses and phone numbers were accessed 148 million times. Officials warned that the impact could extend beyond the headline account figure.

Authorities identified a former employee as the attacker, alleging misuse of authentication signing keys. The probe concluded that weaknesses in Coupang's internal controls enabled the breach.

The ministry criticised delayed reporting and plans to impose a fine on Coupang. The company disputed aspects of the findings but said 33.7 million accounts were involved.

eSafety escalates scrutiny of Roblox safety measures

Australia’s online safety regulator has notified Roblox of plans to directly test how the platform has implemented a set of child safety commitments agreed last year, amid growing concerns over online grooming and sexual exploitation.

In September last year, Roblox made nine commitments following months of engagement with eSafety, aimed at supporting compliance with obligations under the Online Safety Act and strengthening protections for children in Australia.

Measures included making under-16s’ accounts private by default, restricting contact between adults and minors without parental consent, disabling chat features until age estimation is complete, and extending parental controls and voice chat restrictions for younger users.

Roblox told eSafety at the end of 2025 that it had delivered all agreed commitments, after which the regulator continued monitoring implementation. eSafety Commissioner Julie Inman Grant said serious concerns remain over reports of child exploitation and harmful material on the platform.

Direct testing will now examine how the measures work in practice, with support from the Australian Government. Enforcement action may follow, including penalties of up to $49.5 million, alongside checks against new age-restricted content rules from 9 March.

Russia tightens controls as Telegram faces fresh restrictions

Authorities in Russia have tightened their grip on Telegram after the state regulator Roskomnadzor introduced new measures accusing the platform of failing to curb fraud and safeguard personal data.

Users across the country have increasingly reported slow downloads and disrupted media content since January, with complaints rising sharply early in the week. Although officials initially rejected claims of throttling, industry sources insist that download speeds have been deliberately reduced.

Telegram’s founder, Pavel Durov, argues that Roskomnadzor is trying to steer people toward Max rather than allowing open competition. Max is a government-backed messenger widely viewed by critics as a tool for surveillance and political control.

While text messages continue to load normally for most, media content such as videos, images and voice notes has become unreliable, particularly on mobile devices. Some users report that only the desktop version performs without difficulty.

The slowdown is already affecting daily routines, as many Russians rely on Telegram for work communication and document sharing, much as workplaces elsewhere rely on Slack rather than email.

Officials also use Telegram to issue emergency alerts, and regional leaders warn that delays could undermine public safety during periods of heightened military activity.

Pressure on foreign platforms has grown steadily. Restrictions on voice and video calls were introduced last summer, accompanied by claims that criminals and hostile actors were using Telegram and WhatsApp.

Meanwhile, Max continues to gain users, reaching 70 million monthly accounts by December. Despite its rise, it remains behind Telegram and WhatsApp, which still dominate Russia’s messaging landscape.

Facebook boosts user creativity with new Meta AI animations

Meta has introduced a new group of Facebook features that rely on Meta AI to expand personal expression across profiles, photos and Stories.

Users gain the option to animate their profile pictures, turning a still image into a short motion clip that reflects their mood instead of remaining static. Effects such as waves, confetti, hearts and party hats offer simple tools for creating a more playful online presence.

The update also includes Restyle, a tool that reimagines Stories and Memories through preset looks or AI-generated prompts. Users may shift an ordinary photograph into an illustrated, anime or glowy aesthetic, or adjust lighting and colour to match a chosen theme instead of limiting themselves to basic filters.

Facebook will highlight Memories that work well with the Restyle function to encourage wider use.

Feed posts receive a change of their own through animated backgrounds that appear gradually across accounts. People can pair text updates with visual backdrops such as ocean waves or falling leaves, creating messages that stand out instead of blending into the timeline.

Seasonal styles will arrive throughout the year to support festive posts and major events.

Meta aims to encourage more engaging interactions by giving users easy tools for playful creativity. The new features are designed to support expressive posts that feel more personal and more visually distinctive, helping users craft share-worthy moments across the platform.

India enforces a three-hour removal rule for AI-generated deepfake content

Strict new rules have been introduced in India for social media platforms in an effort to curb the spread of AI-generated and deepfake material.

Platforms must label synthetic content clearly and remove flagged posts within three hours instead of allowing manipulated material to circulate unchecked. Government notifications and court orders will trigger mandatory action, creating a fast-response mechanism for potentially harmful posts.

Officials argue that rapid removal is essential as deepfakes grow more convincing and more accessible.

Synthetic media has already raised concerns about public safety, misinformation and reputational harm, prompting the government to strengthen oversight of online platforms and their handling of AI-generated imagery.

The measure forms part of a broader push by India to regulate digital environments and anticipate the risks linked to advanced AI tools.

Authorities maintain that early intervention and transparency around manipulated content are vital for public trust, particularly during periods of political sensitivity or high social tension.

Platforms are now expected to align swiftly with the guidelines and cooperate with legal instructions. The government views strict labelling and rapid takedowns as necessary steps to protect users and uphold the integrity of online communication across India.
