Enterprise AI adoption stalls despite heavy investment

AI has moved from experimentation to expectation, yet many enterprise AI rollouts continue to stall. Boards demand returns, leaders approve tools and governance, but day-to-day workarounds spread, risk grows, and promised value fails to materialise.

The problem rarely lies with the technology itself. Adoption breaks down when AI is treated as an IT deployment rather than an internal product, leaving employees with approved tools but no clear value proposition, limited capacity, and governance that prioritises control over learning.

A global B2B services firm experienced this pattern during an eight-month enterprise AI rollout across commercial teams. Usage dashboards showed activity, but approved platforms failed to align with actual workflows, leading teams to comply superficially or rely on external tools under delivery pressure.

The experience exposed what some leaders describe as the ‘mandate trap’, where adoption is ordered from the top while usability problems fall to middle managers. Hesitation reflected workflow friction and risk rather than resistance, revealing an internal product–market fit problem.

Progress followed when leaders paused broad deployment and refocused on outcomes, workflow redesign, and protected learning time. Narrow pilots and employee-led enterprise AI testing helped scale only tools that reduced friction and earned trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

LegalOn launches agentic AI for in-house legal teams

LegalOn Technologies has introduced five agentic AI tools aimed at transforming in-house legal operations. The company says the agents complete specialised contract and workflow tasks in seconds within its secure platform.

Unlike conventional AI assistants that respond to prompts, the new system is designed to plan and execute multi-step workflows independently, tailoring outputs to each organisation’s templates and standards while keeping lawyers informed of every action.

The suite includes tools for generating playbooks, processing legal intake requests and translating contracts across dozens of languages. Additional agents triage high-volume agreements and produce review-ready drafts from clause libraries and deal inputs.

Founded by two corporate lawyers in Japan, LegalOn now operates across Asia, Europe and North America. Backed by $200m in funding, it serves more than 8,000 clients globally, including Fortune 500 companies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI governance struggles to match rapid adoption

Accelerating AI adoption is exposing clear weaknesses in corporate AI governance. Research shows that while most organisations claim to have oversight processes, only a small minority describe them as mature.

Rapid rollouts across marketing, operations and manufacturing have outpaced safeguards designed to manage bias, transparency and accountability, leaving many firms reacting rather than planning ahead.

Privacy rules, data sovereignty questions and vendor data-sharing risks are further complicating deployment decisions. Fragmented data governance and unclear ownership across departments often stall progress.

Experts argue that effective AI governance must operate as an ongoing, cross-functional model embedded into product lifecycles. Defined accountability, routine audits and clear escalation paths are increasingly viewed as essential for building trust and reducing long-term risk.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI pushes schools to rethink learning priorities

Students speaking at a major education technology conference said AI has revealed weaknesses in traditional learning. Heavy focus on memorisation is becoming less relevant in a world where digital tools provide instant answers.

AI helps learners summarise information and understand complex subjects more easily. Improved access to such tools has made studying more efficient and, in some cases, more engaging.

Teachers have responded by restricting technology use and returning to handwritten assignments. These measures aim to protect academic integrity but have created mixed reactions among students.

Participants supported guided AI use instead of banning it completely. Communication, collaboration and presentation skills were seen as more valuable and less vulnerable to AI shortcuts.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

X given deadline by Brazil to curb Grok sexualised outputs

Brazil has ordered X to immediately stop its chatbot Grok from generating sexually explicit images, escalating international pressure on the platform over the misuse of generative AI tools.

The order, issued on 11 February by Brazil’s National Data Protection Authority and National Consumer Rights Bureau, requires X to prevent the creation of sexualised content involving children, adolescents, or non-consenting adults. Authorities gave the company five days to comply or face legal action and fines.

Officials in Brazil said X claimed to have removed thousands of posts and suspended hundreds of accounts after a January warning. However, follow-up checks found Grok users were still able to generate sexualised deepfakes. Regulators criticised the platform for a lack of transparency in its response.

The move follows growing scrutiny after Indonesia blocked Grok in January, while the UK and France signalled continued pressure. Concerns increased after Grok’s ‘spicy mode’ enabled users to generate explicit images using simple prompts.

According to the Centre for Countering Digital Hate, Grok generated millions of sexualised images within days. X and its parent company, xAI, announced measures in mid-January to restrict such outputs in certain jurisdictions, but regulators said it remains unclear where those safeguards apply.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Women driving tech innovation as Web Summit marks 10 years

Web Summit’s Women in Tech programme marked a decade of work in Qatar by highlighting steady progress in female participation across global technology sectors.

The event recorded an increase in women-founded startups and reflected rising engagement in Qatar, where the share of female founders reached 38 percent.

Leaders from the initiative noted how supportive networks, mentorship, and access to role models are reshaping opportunities for women in technology and entrepreneurship.

Speakers from IBM and other companies focused on the importance of AI skills in shaping the future workforce. They argued that adequate preparation depends on understanding how AI shapes everyday roles, rather than relying solely on technical tools.

IBM’s SkillsBuild platform continues to partner with universities, schools, and nonprofit groups to expand access to recognised AI credentials that can support higher earning potential and new career pathways.

Another feature of the event was its emphasis on inclusion as a driver of innovation. The African Women in Technology initiative, led by Anie Akpe, is working to offer free training in cybersecurity and AI so women in emerging markets can benefit from new digital opportunities.

These efforts aim to support business growth at every level, even for women operating in local markets, who can use technology to reach wider communities.

Female founders also used the platform to showcase new health technology solutions.

ScreenMe, a Qatari company founded by Dr Golnoush Golsharazi, presented its reproductive microbiome testing service, created in response to long-standing gaps in women’s health research and screening.

Organisers expressed confidence that women-led innovation will expand across the region, supported by rising investment and continuing visibility at major global events.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Hackers abuse legitimate admin software to hide cyber attacks

Cybercriminals are increasingly abusing legitimate administrative software to access corporate networks, making malicious activity harder to detect. Attackers are blending into normal operations by relying on trusted workforce and IT management tools rather than custom malware.

Recent campaigns have repurposed ‘Net Monitor for Employees Professional’ and ‘SimpleHelp’, tools usually used for staff oversight and remote support. Screen viewing, file management, and command features were exploited to control systems without triggering standard security alerts.

Researchers at Huntress identified the activity in early 2026, finding that the tools were used to maintain persistent, hidden access. Analysis showed that attackers were actively preparing compromised systems for follow-on attacks rather than limiting their activity to surveillance.

The access was later linked to attempts to deploy ‘Crazy’ ransomware and steal cryptocurrency, with intruders disguising the software as legitimate Microsoft services. Monitoring agents were often renamed to resemble standard cloud processes, thereby remaining active without attracting attention.

Huntress advised organisations to limit software installation rights, enforce multi-factor authentication, and audit networks for unauthorised management tools. Monitoring for antivirus tampering and suspicious program names remains critical for early detection.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Global coalition demands ban on AI-nudification tools over child-safety fears

More than 100 organisations have urged governments to outlaw AI-nudification tools after a surge in non-consensual digital images.

Groups such as Amnesty International, the European Commission, and Interpol argue that the technology now fuels harmful practices that undermine human dignity and child safety. Their concerns intensified after the Grok nudification scandal, where users created sexualised images from ordinary photographs.

Campaigners warn that the tools often target women and children instead of staying within any claimed adult-only environment. Millions of manipulated images have circulated across social platforms, with many linked to blackmail, coercion and child sexual abuse material.

Experts say the trauma caused by these AI images is no less serious because the abuse occurs online.

Organisations within the coalition maintain that tech companies already possess the ability to detect and block such material but have failed to apply essential safeguards.

They want developers and platforms to be held accountable and believe that strict prohibitions are now necessary to prevent further exploitation. Advocates argue that meaningful action is overdue and that protection of users must take precedence over commercial interests.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google acquisition of Wiz cleared under EU merger rules

The European Commission has unconditionally approved Google’s proposed acquisition of Wiz under the EU Merger Regulation, concluding that the deal raises no competition concerns in the European Economic Area.

The assessment focused on the fast-growing cloud security market, where both companies are active. Google provides cloud infrastructure and security services via Google Cloud Platform, while Wiz offers a cloud-native application protection platform for multi-cloud environments.

Regulators examined whether Google could restrict competition by bundling Wiz’s tools or limiting interoperability with rival cloud providers. The market investigation found customers would retain access to credible alternatives and could switch suppliers if needed.

The Commission also considered whether the acquisition would give Google access to commercially sensitive data relating to competing cloud infrastructure providers. Feedback from customers and rivals indicated that the data involved is not sensitive and is generally accessible to other cloud security firms.

Based on these findings, the Commission concluded that the transaction would not significantly impede effective competition in any relevant market. The deal was therefore cleared unconditionally following a Phase I review.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI controls animal behaviour using light-guided technology

Scientists at Nagoya University have developed an advanced AI system capable of identifying specific animal behaviours with over 90% accuracy and controlling the brain circuits that drive them in real time across multiple species.

The system, named YORU (Your Optimal Recognition Utility), recognises entire behaviours from single video frames rather than tracking individual body parts, making it 30% faster than previous tools.

Researchers demonstrated the technology’s precision by combining it with optogenetics to silence a male fruit fly’s courtship song mid-performance, causing the unimpressed female to walk away.

The breakthrough lies in the system’s ability to target individual animals within social groups; previous optogenetic methods illuminated entire laboratory chambers, affecting all subjects simultaneously.

YORU’s AI-driven light source can now track and manipulate a single subject’s neurons whilst its neighbours move freely nearby. The tool has proven its versatility across diverse species, successfully analysing food-sharing in ants, social orientation in zebrafish, and grooming patterns in mice.

Requiring minimal training data and no programming skills, YORU is available online for researchers worldwide studying the neural mechanisms underlying social interactions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!