Women driving tech innovation as Web Summit marks 10 years

Web Summit’s Women in Tech programme marked a decade of work in Qatar by highlighting steady progress in female participation across global technology sectors.

The event recorded an increase in women-founded startups and rising engagement in Qatar, where the share of female founders reached 38 percent.

Leaders from the initiative noted how supportive networks, mentorship, and access to role models are reshaping opportunities for women in technology and entrepreneurship.

Speakers from IBM and other companies focused on the importance of AI skills in shaping the future workforce. They argued that adequate preparation depends on understanding how AI shapes everyday roles, rather than relying solely on technical tools.

IBM’s SkillsBuild platform continues to partner with universities, schools, and nonprofit groups to expand access to recognised AI credentials that can support higher earning potential and new career pathways.

Another feature of the event was its emphasis on inclusion as a driver of innovation. The African Women in Technology initiative, led by Anie Akpe, is working to offer free training in cybersecurity and AI so women in emerging markets can benefit from new digital opportunities.

These efforts aim to support business growth at every level, even for women operating in local markets, who can use technology to reach wider communities.

Female founders also used the platform to showcase new health technology solutions.

ScreenMe, a Qatari company founded by Dr Golnoush Golsharazi, presented its reproductive microbiome testing service, created in response to long-standing gaps in women’s health research and screening.

Organisers expressed confidence that women-led innovation will expand across the region, supported by rising investment and continuing visibility at major global events.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global coalition demands ban on AI-nudification tools over child-safety fears

More than 100 organisations have urged governments to outlaw AI-nudification tools after a surge in non-consensual digital images.

Groups such as Amnesty International, the European Commission, and Interpol argue that the technology now fuels harmful practices that undermine human dignity and child safety. Their concerns intensified after the Grok nudification scandal, where users created sexualised images from ordinary photographs.

Campaigners warn that the tools often target women and children instead of staying within any claimed adult-only environment. Millions of manipulated images have circulated across social platforms, with many linked to blackmail, coercion and child sexual abuse material.

Experts say the trauma caused by these AI images is no less serious because the abuse occurs online.

Organisations within the coalition maintain that tech companies already possess the ability to detect and block such material but have failed to apply essential safeguards.

They want developers and platforms to be held accountable and believe that strict prohibitions are now necessary to prevent further exploitation. Advocates argue that meaningful action is overdue and that protection of users must take precedence over commercial interests.

Growing reliance on AI sparks worries for young users

Research from the UK Safer Internet Centre reveals nearly all young people aged eight to 17 now use artificial intelligence tools, highlighting how deeply the technology has entered daily life. Growing adoption has also increased reliance, with many teenagers using AI regularly for schoolwork, social interactions and online searches.

Education remains one of the main uses, with students turning to AI for homework support and study assistance. However, concerns about fairness and creativity have emerged, as some pupils worry about false accusations of misuse and reduced independent thinking.

Safety fears remain significant, especially around harmful content and privacy risks linked to AI-generated images. Many teenagers and parents worry the technology could be used to create inappropriate or misleading visuals, raising questions about online protection.

Emotional and social impacts are also becoming clear, with some young people using AI for personal advice or practising communication. Limited parental guidance and growing dependence suggest governments and schools may soon consider stronger oversight and clearer rules.

TikTok accused of breaching EU digital safety rules

The European Commission has concluded that TikTok’s design breaches the Digital Services Act by encouraging compulsive use and failing to protect users, particularly children and teenagers.

Preliminary findings say the platform relies heavily on features such as infinite scroll, which automatically delivers new videos and makes disengagement difficult.

Regulators argue that such mechanisms place users into habitual patterns of repeated viewing rather than supporting conscious choice. EU officials found that safeguards introduced by TikTok do not adequately reduce the risks linked to excessive screen time.

Daily screen time limits were described as ineffective because alerts are easy to dismiss, even for younger users who receive automatic restrictions. Parental control tools were also criticised for requiring significant effort, technical knowledge and ongoing involvement from parents.

Henna Virkkunen, the Commission’s executive vice-president for tech sovereignty, security and democracy, said addictive social media design can harm the development of young people. European law, she said, makes platforms responsible for the effects their services have on users.

Regulators concluded that compliance with the Digital Services Act would require TikTok to alter core elements of its product, including changes to infinite scroll, recommendation systems and screen break features.

TikTok rejected the findings, calling them inaccurate and saying the company would challenge the assessment. The platform argues that it already offers a range of tools, including sleep reminders and wellbeing features, to help users manage their time.

The investigation remains ongoing and no penalties have yet been imposed. A final decision could still result in enforcement measures, including fines of up to six per cent of TikTok’s global annual turnover.

Slovenia plans social media ban for children under 15

Slovenia is the latest of several countries moving towards banning access to social media platforms for children under the age of 15, as the government prepares draft legislation aimed at protecting minors online.

Deputy Prime Minister Matej Arčon said the Education Ministry initiated the proposal, which would be developed with input from professionals.

The planned law would apply to major social networks where user-generated content is shared, including TikTok, Snapchat and Instagram. Arčon said the initiative reflects growing international concern over the impact of social media on children’s mental health, privacy and exposure to addictive design features.

Slovenia’s move follows similar debates and proposals across Europe and beyond. Countries such as Italy, France, Spain, the UK, Greece and Austria have considered restrictions, while Australia has already introduced a nationwide minimum age for social media use.

Spain’s prime minister recently defended proposed limits, arguing that technology companies should not influence democratic decision-making.

Critics of such bans warn of potential unintended consequences. Telegram founder Pavel Durov has argued that age-based restrictions could lead to broader data collection and increased state control over online content.

Despite these concerns, Slovenia’s government appears determined to proceed, positioning the measure as part of a broader effort to strengthen child protection in the digital space.

EU split widens over ban on AI nudification apps

European lawmakers remain divided over whether AI tools that generate non-consensual sexual images should face an explicit ban in EU legislation.

The split emerged as debate intensified over the AI simplification package, which is moving through Parliament and the Council rather than remaining confined to earlier negotiations.

Concerns escalated after Grok was used to create images that digitally undressed women and children.

EU regulators responded by launching an investigation under the Digital Services Act, and the Commission described the behaviour as illegal under existing European rules. Several lawmakers argue that the AI Act should name pornification apps directly instead of relying on broader legal provisions.

Lead MEPs did not include a ban in their initial draft of the Parliament’s position, prompting other groups to consider adding amendments. Negotiations continue as parties explore how such a restriction could be framed without creating inconsistencies within the broader AI framework.

The Commission appears open to strengthening the law and has hinted that the AI omnibus could be an appropriate moment to act. Lawmakers now have a limited time to decide whether an explicit prohibition can secure political agreement before the amendment deadline passes.

Under 16 social media ban proposed in Spain

Spain is preparing legislation to ban social media access for users under 16, with the proposal expected to be introduced within days. Prime Minister Pedro Sánchez framed the move as a child-protection measure aimed at reducing exposure to harmful online environments.

Government plans include mandatory age-verification systems for platforms, designed to serve as practical barriers rather than symbolic safeguards. Officials argue that minors face escalating risks online, including addiction, exploitation, violent content, and manipulation.

Additional provisions could hold technology executives legally accountable for unlawful or hateful content that remains online. The proposal reflects a broader regulatory shift toward platform responsibility and stricter enforcement standards.

Momentum for youth restrictions is building across Europe. France and Denmark are pursuing similar controls, while the EU Digital Services Act guidelines allow member states to define a national ‘digital majority age’.

The European Commission is also testing an age verification app, with wider deployment expected next year.

Ofcom expands scrutiny of X over Grok deepfake concerns

The British regulator, Ofcom, has released an update on its investigation into X after reports that the Grok chatbot had generated sexual deepfakes of real people, including minors.

As such, the regulator initiated a formal inquiry to assess whether X took adequate steps to manage the spread of such material and to remove it swiftly.

X has since introduced measures to limit the distribution of manipulated images, while the ICO and regulators abroad have opened parallel investigations.

The Online Safety Act does not cover all chatbot services, as regulation depends on whether a system enables user interactions, provides search functionality, or produces pornographic material.

Many AI chatbots fall partly or entirely outside the Act’s scope, limiting regulators’ ability to act when harmful content is created during one-to-one interactions.

Ofcom cannot currently investigate the standalone Grok service for producing illegal images because the Act does not cover that form of generation.

Evidence-gathering from X continues, with legally binding information requests issued to the company. Ofcom will offer X a full opportunity to present representations before any provisional findings are published.

Enforcement actions take several months, since regulators must follow strict procedural safeguards to ensure decisions are robust and defensible.

Ofcom added that people who encounter harmful or illegal content online are encouraged to report it directly to the relevant platforms. Incidents involving intimate images can be reported to dedicated services for adults or support schemes for minors.

Material that may constitute child sexual abuse should be reported to the Internet Watch Foundation.

France targets X over algorithm abuse allegations

The cybercrime unit of the Paris prosecutor has raided the French office of X as part of an expanding investigation into alleged algorithm manipulation and illicit data extraction.

Authorities said the probe began in 2025 after a lawmaker warned that biased algorithms on the platform might have interfered with automated data systems. Europol supported the operation together with national cybercrime officers.

Prosecutors confirmed that the investigation now includes allegations of complicity in circulating child sex abuse material, sexually explicit deepfakes and denial of crimes against humanity.

Elon Musk and former chief executive Linda Yaccarino have been summoned for questioning in April in their roles as senior figures of the company at the time.

The prosecutor’s office also announced its departure from X in favour of LinkedIn and Instagram, rather than continuing to use the platform under scrutiny.

X strongly rejected the accusations and described the raid as politically motivated. Musk claimed authorities should focus on pursuing sex offenders instead of targeting the company.

The platform’s government affairs team said the investigation amounted to law enforcement theatre rather than a legitimate examination of serious offences.

Regulatory pressure increased further as the UK data watchdog opened inquiries into both X and xAI over concerns about Grok producing sexualised deepfakes. Ofcom is already conducting a separate investigation that is expected to take months.

The widening scrutiny reflects growing unease around alleged harmful content, political interference and the broader risks linked to large-scale AI systems.

Grok returns to Indonesia as X agrees to tightened oversight

Indonesia has restored access to Grok after receiving guarantees from X that stronger safeguards will be introduced to prevent further misuse of the AI tool.

Authorities suspended the service last month following the spread of sexualised images on the platform, making Indonesia the first country to block the system.

Officials from the Ministry of Communications and Digital Affairs said that access had been reinstated on a conditional basis after X submitted a written commitment outlining concrete measures to strengthen compliance with national law.

The ministry emphasised that the document serves as a starting point for evaluation instead of signalling the end of supervision.

However, the government warned that restrictions could return if Grok fails to meet local standards or if new violations emerge. Indonesian regulators stressed that monitoring would remain continuous, and access could be withdrawn immediately should inconsistencies be detected.

The decision marks a cautious reopening rather than a full reinstatement, reflecting Indonesia’s wider efforts to demand greater accountability from global platforms deploying advanced AI systems within its borders.
