Saudi Arabia recasts Vision 2030 with new priorities

Saudi Arabia is steering the new phase of Vision 2030 toward technology, digital infrastructure and advanced industry, moving away from reliance on large urban construction schemes.

Officials highlight the need to support sectors that can accelerate innovation, strengthen data capabilities and expand the kingdom’s role in global tech development.

The move aligns with ongoing efforts to diversify the economy and build long-term competitiveness in areas such as smart manufacturing, logistics technology and clean energy systems.

Recent adjustments involve scaling back or rescheduling some giga projects so that investment can be channelled toward initiatives with strong digital and technological potential.

Elements of the NEOM programme have been revised, while funding attention is shifting to areas that enable automation, renewable technologies and high-value services.

Saudi Arabia aims to position Riyadh as a regional hub for research, emerging technologies and advanced industries. Officials stress that Vision 2030 remains active, yet its next stage will focus on projects that can accelerate technological adoption and strengthen economic resilience.

The Public Investment Fund continues to guide investment toward ecosystems that support innovation, including clean energy, digital infrastructure and international technology partnerships.

The approach reflects earlier recommendations to align economic planning with evolving skills, future labour market needs and opportunities in fast-growing sectors.

Analysts note that the revised direction prioritises sustainable growth by expanding the kingdom’s participation in global technological development instead of relying mainly on construction-driven momentum.

Social and regulatory reforms connected to digital transformation also remain part of the Vision 2030 agenda. Investments in training, digital literacy and workforce development are intended to ensure that young people can participate fully in the technology sectors the kingdom is prioritising.

With such a shift, the government seeks to balance long-term economic diversification with practical technological goals that reinforce innovation and strengthen the country’s competitive position.


AI model achieves accurate detection of placenta accreta spectrum in high-risk pregnancies

A new AI model has shown strong potential for detecting placenta accreta spectrum, a dangerous condition that often goes undiagnosed during pregnancy.

Researchers presented the findings at the annual meeting of the Society for Maternal-Fetal Medicine, highlighting that traditional screening identifies only about half of all cases.

Placenta accreta spectrum arises when the placenta attaches abnormally to the uterine wall, often after previous surgical procedures such as caesarean delivery.

The condition can trigger severe haemorrhage, organ failure, and death, yet many pregnancies with elevated risk receive inconclusive or incorrect assessments through standard ultrasound examinations.

The study involved a retrospective review by specialists at Baylor College of Medicine, who analysed 2D obstetric ultrasound images from 113 high-risk pregnancies managed at Texas Children’s Hospital between 2018 and 2025.

The AI system detected every confirmed case of placenta accreta spectrum, produced two false positives, and generated no false negatives.
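As a rough illustration of what those figures imply, the short Python sketch below computes standard screening metrics from a confusion matrix. The summary does not state how many of the 113 pregnancies were confirmed cases, so the positive count used here is a hypothetical placeholder rather than a study figure.

```python
# Minimal sketch: screening metrics from a confusion matrix.
# The reported cohort is 113 high-risk pregnancies with zero false negatives
# and two false positives; the number of confirmed PAS cases (n_positive)
# is NOT reported here and is a hypothetical placeholder.

n_total = 113            # high-risk pregnancies in the reported cohort
n_positive = 30          # HYPOTHETICAL count of confirmed PAS cases
false_negatives = 0      # reported: no confirmed case was missed
false_positives = 2      # reported: two pregnancies flagged without confirmed PAS

true_positives = n_positive - false_negatives
true_negatives = n_total - n_positive - false_positives

sensitivity = true_positives / n_positive                  # 100% when FN = 0
specificity = true_negatives / (n_total - n_positive)
ppv = true_positives / (true_positives + false_positives)  # precision

print(f"Sensitivity: {sensitivity:.1%}")
print(f"Specificity: {specificity:.1%}")
print(f"Positive predictive value: {ppv:.1%}")
```

With zero false negatives the sensitivity is 100 percent by construction; the specificity and positive predictive value shift with the assumed number of confirmed cases.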

Researchers believe such technology could significantly improve early identification and clinical preparation.

They argue that AI screening, when used in addition to current methods, may reduce maternal complications and support safer outcomes for patients facing this increasingly common condition.


Enterprise AI adoption stalls despite heavy investment

AI has moved from experimentation to expectation, yet many enterprise AI rollouts continue to stall. Boards demand returns, leaders approve tools and governance, but day-to-day workarounds spread, risk grows, and promised value fails to materialise.

The problem rarely lies with the technology itself. Adoption breaks down when AI is treated as an IT deployment rather than an internal product, leaving employees with approved tools but no clear value proposition, limited capacity, and governance that prioritises control over learning.

A global B2B services firm experienced this pattern during an eight-month enterprise AI rollout across commercial teams. Usage dashboards showed activity, but approved platforms failed to align with actual workflows, leading teams to comply superficially or rely on external tools under delivery pressure.

The experience exposed what some leaders describe as the ‘mandate trap’, where adoption is ordered from the top while usability problems fall to middle managers. Hesitation reflected workflow friction and risk rather than resistance, revealing an internal product–market fit problem.

Progress followed when leaders paused broad deployment and refocused on outcomes, workflow redesign, and protected learning time. Narrow pilots and employee-led enterprise AI testing helped scale only tools that reduced friction and earned trust.


X given deadline by Brazil to curb Grok sexualised outputs

Brazil has ordered X to immediately stop its chatbot Grok from generating sexually explicit images, escalating international pressure on the platform over the misuse of generative AI tools.

The order, issued on 11 February by Brazil’s National Data Protection Agency and National Consumer Rights Bureau, requires X to prevent the creation of sexualised content involving children, adolescents, or non-consenting adults. Authorities gave the company five days to comply or face legal action and fines.

Officials in Brazil said X claimed to have removed thousands of posts and suspended hundreds of accounts after a January warning. However, follow-up checks found Grok users were still able to generate sexualised deepfakes. Regulators criticised the platform for a lack of transparency in its response.

The move follows growing scrutiny after Indonesia blocked Grok in January, while the UK and France signalled continued pressure. Concerns increased after Grok’s ‘spicy mode’ enabled users to generate explicit images using simple prompts.

According to the Centre for Countering Digital Hate, Grok generated millions of sexualised images within days. X and its parent company, xAI, announced measures in mid-January to restrict such outputs in certain jurisdictions, but regulators said it remains unclear where those safeguards apply.


Codex growth prompts OpenAI to expand access

OpenAI said its new Codex Mac app has surpassed one million downloads just over a week after launch, with overall Codex usage rising by 60% following the release of GPT-5.3-Codex.

The strong uptake has prompted OpenAI to extend free access to Codex for Free and Go users beyond the initial launch promotion. Sam Altman said usage limits for lower tiers may be tightened, but access would remain available so more users can experiment and build.

Separately, OpenAI released a YouTube video showcasing a redesigned Deep Research interface, introducing a full-screen report viewer that opens research outputs in a separate window from the chat interface.

The updated layout includes a table of contents for navigation, hyperlinks, and anchor tags within reports, and a dedicated source panel for verification. Users can also download reports as PDF or Word files, while new controls allow research scopes and sources to be adjusted during generation.

The Deep Research updates are available to Plus and Pro users, with broader access expected soon. OpenAI also confirmed the changes in ChatGPT release notes on 10 February and announced a smaller GPT-5.2 update focused on more measured responses.


AWS chief sees AI shifting from content creation to autonomous task completion

AI is shifting from answering questions to autonomously accomplishing tasks, a transformation AWS CEO Matt Garman believes will unlock far greater enterprise value.

Speaking at AWS re:Invent 2025, Garman explained that AI inference (the computing capability that allows models to generate content, make predictions, and take actions against real-world data) represents a fundamental new building block in computing.

He described it as developers gaining access to a ‘new Lego’ that enables applications to make decisions and complete work independently. The distinction between content generation and task accomplishment carries significant implications for enterprise value.

First-wave generative AI focused on writing emails and summarising documents. Task-accomplishing agents can review insurance claims, cross-reference medical records, and process approved claims without human intervention.
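To make the distinction concrete, the sketch below contrasts a one-shot content-generation call with a simple task-accomplishing routine for claims review. It is an illustration under stated assumptions only: every function, field name and decision rule is a hypothetical stand-in, not an AWS or Bedrock AgentCore API.

```python
# Illustrative sketch only: contrasts first-wave content generation with a
# task-accomplishing routine. All functions and fields are hypothetical
# stand-ins, not AWS or Bedrock AgentCore APIs.

def generate_summary(claim_text: str) -> str:
    """First-wave generative AI: draft text for a human reviewer to act on."""
    return f"Draft summary for a human reviewer: {claim_text[:60]}..."

def fetch_medical_records(patient_id: str) -> dict:
    """Hypothetical record lookup standing in for a real data source."""
    return {"patient_id": patient_id, "procedures": ["x-ray"], "covered": True}

def review_claim(claim: dict) -> str:
    """Task-accomplishing agent: cross-reference data and finish the workflow."""
    records = fetch_medical_records(claim["patient_id"])
    if records["covered"] and claim["amount"] <= claim["policy_limit"]:
        return "approved"            # the agent completes the task itself
    return "escalated to a human"    # ambiguous cases still go to a person

claim = {"patient_id": "p-001", "amount": 420.0, "policy_limit": 1000.0,
         "text": "X-ray following a minor fracture"}
print(generate_summary(claim["text"]))
print(review_claim(claim))
```

The point of the contrast is that the first function only produces text for someone else to act on, while the second reaches into data and closes out the task, which is where Garman locates the additional enterprise value.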

Garman predicts widespread enterprise value creation from agents in 2026. AWS announced Amazon Bedrock AgentCore and three frontier agents at re:Invent 2025, providing organisations with infrastructure to deploy autonomous AI agents at scale.

For business leaders, he suggested, investments in agents that automate end-to-end workflows will deliver far greater returns than tools that simply help employees work faster.


Women driving tech innovation as Web Summit marks 10 years

Web Summit’s Women in Tech programme marked a decade of work in Qatar by highlighting steady progress in female participation across global technology sectors.

The Web Summit event recorded an increase in women-founded startups and reflected rising engagement in Qatar, where the share of female founders reached 38 percent.

Leaders from the initiative noted how supportive networks, mentorship, and access to role models are reshaping opportunities for women in technology and entrepreneurship.

Speakers from IBM and other companies focused on the importance of AI skills in shaping the future workforce. They argued that adequate preparation depends on understanding how AI shapes everyday roles, rather than relying solely on technical tools.

IBM’s SkillsBuild platform continues to partner with universities, schools, and nonprofit groups to expand access to recognised AI credentials that can support higher earning potential and new career pathways.

Another feature of the event was its emphasis on inclusion as a driver of innovation. The African Women in Technology initiative, led by Anie Akpe, is working to offer free training in cybersecurity and AI so women in emerging markets can benefit from new digital opportunities.

These efforts aim to support business growth at every level, even for women operating in local markets, who can use technology to reach wider communities.

Female founders also used the platform to showcase new health technology solutions.

ScreenMe, a Qatari company founded by Dr Golnoush Golsharazi, presented its reproductive microbiome testing service, created in response to long-standing gaps in women’s health research and screening.

Organisers expressed confidence that women-led innovation will expand across the region, supported by rising investment and continuing visibility at major global events.


Hackers abuse legitimate admin software to hide cyber attacks

Cybercriminals are increasingly abusing legitimate administrative software to access corporate networks, making malicious activity harder to detect. Attackers are blending into normal operations by relying on trusted workforce and IT management tools rather than custom malware.

Recent campaigns have repurposed ‘Net Monitor for Employees Professional’ and ‘SimpleHelp’, tools usually used for staff oversight and remote support. Screen viewing, file management, and command features were exploited to control systems without triggering standard security alerts.

Researchers at Huntress identified the activity in early 2026, finding that the tools were used to maintain persistent, hidden access. Analysis showed that attackers were actively preparing compromised systems for follow-on attacks rather than limiting their activity to surveillance.

The access was later linked to attempts to deploy ‘Crazy’ ransomware and steal cryptocurrency, with intruders disguising the software as legitimate Microsoft services. Monitoring agents were often renamed to resemble standard cloud processes, thereby remaining active without attracting attention.

Huntress advised organisations to limit software installation rights, enforce multi-factor authentication, and audit networks for unauthorised management tools. Monitoring for antivirus tampering and suspicious program names remains critical for early detection.
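As a minimal sketch of that last point, the example below uses the psutil library to flag running processes whose names resemble common remote-administration tools, or that imitate trusted service names while running from an unexpected path. The watchlists are illustrative placeholders, not Huntress detection rules, and a real audit would rely on endpoint telemetry rather than a one-off script.

```python
# Minimal sketch: flag running processes whose names look like remote-admin
# tools or imitate trusted service names. The watchlists below are
# hypothetical examples for illustration, not Huntress detection logic.
import psutil

REMOTE_ADMIN_HINTS = {"simplehelp", "netmonitor"}            # hypothetical hints
IMPERSONATION_HINTS = {"onedrive", "azure", "msedgeupdate"}  # names attackers may mimic

def suspicious_processes():
    findings = []
    for proc in psutil.process_iter(["pid", "name", "exe"]):
        name = (proc.info["name"] or "").lower()
        exe = (proc.info["exe"] or "").lower()
        if any(hint in name for hint in REMOTE_ADMIN_HINTS):
            findings.append((proc.info["pid"], name, "matches a known remote-admin tool name"))
        elif any(hint in name for hint in IMPERSONATION_HINTS) and "microsoft" not in exe and "windows" not in exe:
            # Placeholder heuristic: a trusted-sounding name running from an
            # unexpected path warrants manual review.
            findings.append((proc.info["pid"], name, f"unexpected path: {exe or 'unknown'}"))
    return findings

if __name__ == "__main__":
    for pid, name, reason in suspicious_processes():
        print(f"PID {pid}: {name} -> {reason}")
```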


Global coalition demands ban on AI-nudification tools over child-safety fears

More than 100 organisations have urged governments to outlaw AI-nudification tools after a surge in non-consensual digital images.

Groups such as Amnesty International, the European Commission, and Interpol argue that the technology now fuels harmful practices that undermine human dignity and child safety. Their concerns intensified after the Grok nudification scandal, where users created sexualised images from ordinary photographs.

Campaigners warn that the tools often target women and children instead of staying within any claimed adult-only environment. Millions of manipulated images have circulated across social platforms, with many linked to blackmail, coercion and child sexual abuse material.

Experts say the trauma caused by these AI images is no less serious because the abuse occurs online.

Organisations within the coalition maintain that tech companies already possess the ability to detect and block such material but have failed to apply essential safeguards.

They want developers and platforms to be held accountable and believe that strict prohibitions are now necessary to prevent further exploitation. Advocates argue that meaningful action is overdue and that protection of users must take precedence over commercial interests.


Google acquisition of Wiz cleared under EU merger rules

The European Commission has unconditionally approved Google’s proposed acquisition of Wiz under the EU Merger Regulation, concluding that the deal raises no competition concerns in the European Economic Area.

The assessment focused on the fast-growing cloud security market, where both companies are active. Google provides cloud infrastructure and security services via Google Cloud Platform, while Wiz offers a cloud-native application protection platform for multi-cloud environments.

Regulators examined whether Google could restrict competition by bundling Wiz’s tools or limiting interoperability with rival cloud providers. The market investigation found customers would retain access to credible alternatives and could switch suppliers if needed.

The Commission also considered whether the acquisition would give Google access to commercially sensitive data relating to competing cloud infrastructure providers. Feedback from customers and rivals indicated that the data involved is not sensitive and is generally accessible to other cloud security firms.

Based on these findings, the Commission concluded that the transaction would not significantly impede effective competition in any relevant market. The deal was therefore cleared unconditionally following a Phase I review.
