Wikipedia-based AI model identifies 100 emerging technologies to watch in 2026

A new analysis by Australian researchers reveals how AI is reshaping the way emerging technologies are identified and tracked.

Using a dataset derived from thousands of Wikipedia entries, the researchers mapped more than 23,000 technologies to produce the ‘Momentum 100’ list, highlighting the fastest-growing technologies across science and industry.

The findings place reinforcement learning at the top, followed closely by blockchain and other rapidly advancing fields such as 3D printing, soft robotics and augmented reality.

These technologies reflect a broader shift towards data-driven innovation, where systems capable of learning, adapting and scaling are becoming central to both research and commercial applications.

Unlike traditional forecasts, which often rely on expert judgement, the model uses large-scale data analysis to detect patterns of growth and interconnection between technologies.

The approach offers a more dynamic and repeatable method, capturing early signals that might otherwise be overlooked in manual assessments.
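The researchers' exact methodology is not detailed here, but the general idea of detecting growth signals from large-scale attention data can be sketched in a few lines. The topic names and monthly view counts below are entirely hypothetical, chosen only to illustrate how a simple momentum score could rank technologies:

```python
# Illustrative sketch: rank technology topics by relative growth in
# attention (e.g. Wikipedia page views). All figures below are hypothetical.

def momentum(series):
    """Relative growth between the first and second half of a series."""
    half = len(series) // 2
    early, late = sum(series[:half]), sum(series[half:])
    return (late - early) / early if early else float("inf")

# Hypothetical monthly page-view counts per topic
views = {
    "reinforcement learning": [90, 110, 140, 200, 260, 340],
    "blockchain":             [300, 310, 330, 380, 420, 460],
    "soft robotics":          [40, 42, 45, 50, 56, 61],
}

ranked = sorted(views, key=lambda t: momentum(views[t]), reverse=True)
for topic in ranked:
    print(f"{topic}: {momentum(views[topic]):.2f}")
```

A production system would of course work over tens of thousands of pages and also model interconnections (e.g. link structure between articles), but the core signal is the same: measure change in attention rather than asking experts to guess.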

Despite its advantages, researchers caution that predicting real-world impact remains difficult at early stages.

While AI-driven mapping provides valuable insights, policymakers and industry leaders still rely on hybrid approaches that combine data analysis with expert evaluation, as seen in frameworks developed by organisations such as the World Economic Forum.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GPT-5.5 pushes AI deeper into agentic work

OpenAI has released GPT-5.5 as its latest push towards more capable agentic AI, presenting the model as better suited to complex, multi-step digital work across coding, research, analysis, and enterprise tasks.

The company frames it as a system designed to carry more of the work itself, moving beyond isolated prompt-response interactions towards fuller execution across digital workflows.

According to OpenAI, the model’s biggest gains are in software engineering, tool use, and knowledge work. GPT-5.5 improves performance on coding and workflow benchmarks, strengthens long-horizon reasoning, and handles complex digital tasks with greater efficiency while maintaining earlier latency standards.

OpenAI also says the model performs better across documents, spreadsheets, presentations, and data analysis, reflecting a broader effort to make AI more useful across full professional workflows rather than only as an assistant for isolated tasks.

The release also highlights stronger performance in scientific and technical research, alongside expanded safety testing and tighter safeguards for higher-risk capabilities.

The wider significance of GPT-5.5 lies in its reflection of the next phase of AI competition. The focus is shifting from better answers to more reliable execution across real-world digital work, with growing implications for productivity, oversight, and governance.

Why does it matter? 

GPT-5.5 signals a shift from AI as a passive tool to AI as an active digital operator that can complete full workflows across coding, research, and business systems with minimal human supervision.

Over time, such capability could reshape productivity, speed up development cycles, and shift competitive advantage toward those who integrate autonomous AI most effectively while managing safety and governance risks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

World Economic Forum analysis explains what drives startup growth today

Findings from the World Economic Forum (WEF) highlight a shift in how early-stage ventures grow from pilot projects into fully operational businesses.

Evidence gathered by UpLink, the WEF's early-stage innovation initiative, from more than 200 start-ups, investors and policymakers suggests that scaling no longer depends primarily on innovation itself, but on the conditions enabling deployment.

Core and emerging technologies already exist across sectors, yet barriers remain in market adoption, coordination, and institutional readiness.

Resilience has moved from a strategic ambition to an immediate operational requirement. Start-ups are increasingly built around urgent, clearly defined problems, allowing them to adapt quickly in volatile environments shaped by geopolitical tensions, supply chain disruption, and climate pressures.

Strong partnerships have emerged as a central priority, with a significant majority of ventures seeking collaboration with larger corporate actors to gain access to infrastructure, regulatory pathways, and credibility.

Collaboration at early stages is proving essential in reducing risk and accelerating adoption. Traditional scaling models, based on proving technology before securing buyers, are losing effectiveness in complex sectors with high institutional risk.

Shared responsibility across multiple stakeholders enables innovation to move beyond demonstration phases into real-world application, particularly when aligned with procurement systems and regulatory frameworks.

Commercial viability has also become central to scaling success. Impact alone is no longer sufficient, as investors and buyers increasingly prioritise measurable financial outcomes such as cost efficiency, risk reduction, and resilience.

Market signals, including early contracts and partnerships, now outweigh funding rounds as indicators of credibility.

Why does it matter?

The WEF analysis underscores that scalable growth depends less on innovation alone and more on coordinated ecosystems that turn pilots into real-world adoption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Crypto derivatives rules face overhaul in Thailand consultation

Thailand is moving to simplify access to crypto derivatives markets through proposed regulatory changes aimed at reducing operational barriers for digital asset firms. The Securities and Exchange Commission of Thailand has opened a consultation on letting licensed crypto firms access derivatives without separate corporate entities. 

Current regulations require firms to operate distinct legal structures for derivatives activity, increasing compliance costs and limiting market expansion. The proposed framework consolidates licensing under a single regulatory umbrella while maintaining oversight through internal controls and conflict management rules. 

The reform reflects a broader international shift towards integrating crypto and traditional financial markets within unified trading environments. Similar momentum is visible in the United States, where discussions on crypto perpetual futures are advancing alongside increased institutional activity in derivatives infrastructure.

Market activity is already responding to anticipated changes, including acquisitions of regulated trading platforms to support expanded product offerings. These developments indicate growing alignment between regulatory evolution and industry expansion in digital asset derivatives markets.

Why does it matter? 

These changes represent a broader move toward integrating crypto and traditional markets under unified regulatory frameworks. Reducing structural barriers may improve efficiency and innovation while preserving oversight.

Parallel developments across key jurisdictions also point to growing global competition to set standards for crypto derivatives, with implications for liquidity, access, and institutional participation worldwide. 

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

YouTube expands AI deepfake detection tools for celebrities

YouTube has announced the expansion of its likeness detection technology to the entertainment industry, extending access beyond content creators to talent agencies, management companies and the individuals they represent.

The move is part of a broader effort by the platform to address the growing misuse of AI to generate misleading or unauthorised videos of public figures. By extending the tool to entertainment industry stakeholders, YouTube is signalling that AI-driven impersonation is no longer treated as a niche creator issue but as a broader identity and rights problem.

The system works in a way broadly comparable to Content ID, allowing eligible users to identify videos that use AI to replicate a person’s face or likeness. Once such content is detected, individuals can request its removal through YouTube’s existing privacy complaint process.
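YouTube has not disclosed how its likeness detection works internally, but the general technique behind systems of this kind is to compare a reference face embedding against embeddings extracted from uploaded video frames. The vectors, frame IDs and threshold below are hypothetical, included only to sketch the matching step:

```python
# Illustrative sketch of embedding-based likeness matching: flag frames
# whose face embedding is highly similar to an enrolled reference vector.
# All vectors and the threshold are hypothetical; YouTube's actual
# system is not public.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

REFERENCE = [0.12, 0.80, 0.35, 0.44]        # enrolled likeness embedding (hypothetical)
THRESHOLD = 0.95                            # similarity required to flag a match

frame_embeddings = {
    "frame_001": [0.11, 0.79, 0.36, 0.45],  # near-duplicate of the reference
    "frame_002": [0.90, 0.10, 0.05, 0.02],  # unrelated face
}

flagged = [fid for fid, emb in frame_embeddings.items()
           if cosine_similarity(REFERENCE, emb) >= THRESHOLD]
print(flagged)
```

In a real pipeline the embeddings would come from a face-recognition model run over sampled video frames, with flagged matches routed into the removal-request process the article describes.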

The rollout has been developed with input from major industry players, including Creative Artists Agency, United Talent Agency, William Morris Endeavor, and Untitled Management. Those partnerships are intended to help YouTube refine how the system works in practice and ensure it reflects the needs of artists and rights holders dealing with synthetic media.

Importantly, access to the tool is not limited to people who actively run YouTube channels. Celebrities and public figures can use it even without a direct creator presence on the platform, extending its reach across a much broader part of the entertainment ecosystem.

The significance of the update lies in how platforms are beginning to treat AI impersonation as a governance issue rather than merely a content-moderation problem.

As synthetic media tools become easier to use and more convincing, technology companies are under growing pressure to provide faster and more credible mechanisms for detecting misuse, protecting identity rights, and limiting deceptive content.

YouTube’s latest move shows that platform responses are becoming more structured and rights-based, especially in sectors where a person’s likeness is closely tied to reputation, image, and commercial value. The bigger question now is whether such tools will prove effective enough to keep pace with the scale and speed of AI-generated impersonation online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian regulator highlights rising AI use across various industries

The Australian Communications and Media Authority reports that AI use is accelerating across telecommunications, media and online gambling sectors. The findings highlight growing adoption alongside increasing complexity in how the technology is applied.

According to the Authority, AI is being used in media to personalise advertising and streamline content production. However, concerns have been raised about misinformation risks and the use of copyrighted material.

In the gambling sector, AI supports predictive analytics, promotions and detection of harmful behaviour, while telecommunications companies use it to improve efficiency, detect scams and strengthen network resilience.

The Authority states that despite efficiency gains, stakeholders are calling for stronger governance, transparency and safeguards as AI adoption expands in Australia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK regulator selects firms for second cohort of AI testing programme in financial services

The Financial Conduct Authority (FCA) has selected eight firms to join the second cohort of its AI Live Testing programme, with trials beginning in April 2026. The announcement was made at UK FinTech Week.

The initiative allows participants to test AI applications under regulatory oversight, with a focus on risk management and live monitoring. The FCA is working with AI assurance specialist Advai to support the deployment of systems across financial markets.

Jessica Rusu, chief data, information and intelligence officer at the FCA, said the programme reflects collaboration between regulators and industry. She added that the FCA continues to work with firms to support the safe and responsible development of AI in UK financial markets.

The second cohort includes Barclays, Experian, Lloyds Banking Group, UBS, Aereve, Coadjute, GoCardless and Palindrome. The FCA noted that use cases include targeted investment support, credit scoring insights, anti-money laundering detection and agentic payments.

The FCA will also use the programme to examine emerging concepts, such as targeted support, a lighter-touch regulatory category aimed at addressing the UK's advice gap. It reported that applications to its innovation services, including the Regulatory Sandbox and Innovation Pathways, increased by 49 percent year on year. A report on AI adoption practices is expected later in 2026, with a full evaluation of the cohort due in 2027.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU launches protected data register

The European Commission has introduced a European Register of protected data to improve access to public sector information. The initiative is presented through the data.europa.eu platform as part of wider data-sharing efforts.

According to the Commission, the register provides a central point for discovering protected data held by public authorities. It is designed to make such datasets more visible and easier to locate.

The platform helps users identify conditions under which protected data can be accessed and reused. This includes guidance on legal and technical requirements linked to sensitive datasets.

The European Commission states that the register aims to strengthen transparency and data-driven innovation while supporting access to public sector information across the European Union.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Saudi Arabia pilots global blueprint for quantum economy development

Saudi Arabia has become the first country to pilot the World Economic Forum’s Quantum Economy Blueprint, applying a global strategic framework to its national innovation agenda.

The initiative, led by the Centre for the Fourth Industrial Revolution Saudi Arabia, aims to align emerging quantum technologies with the long-term development goals outlined in Vision 2030.

The pilot, based on analysis of 24 national quantum strategies and input from global organisations, translated theoretical guidance into practical policy workstreams.

It highlighted how quantum initiatives gain stronger traction when embedded within broader national priorities, such as economic diversification and technological leadership, rather than being treated as isolated research efforts.

Five key lessons emerged from the exercise. These include the importance of linking research to commercial applications, ensuring flexible access to quantum hardware through partnerships and cloud systems, and building strong collaboration between government, academia, and industry.

The findings also underline that talent development is central to competitiveness, extending beyond scientists to engineers, policymakers, and business specialists.

The experience suggests that countries do not need full ownership of quantum infrastructure to participate in the sector, but can instead rely on strategic access models and ecosystem cooperation.

Saudi Arabia’s pilot demonstrates how global frameworks can be adapted into national action, offering a model for other countries developing their quantum strategies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

DIFC unveils plan to build ‘AI-native’ financial centre in Dubai

Dubai International Financial Centre has announced plans to become what it describes as the world’s first ‘AI-native’ financial centre, embedding AI into regulation, business operations, and physical infrastructure rather than treating it as a stand-alone tool.

The initiative is being presented as a broader redesign of how a financial centre functions. Instead of limiting AI to back-office support or isolated digital services, DIFC says it wants AI to shape legal frameworks, compliance processes, client management, and the wider operation of the financial ecosystem.

The plan builds on DIFC’s longer-term AI strategy, launched in 2023 and already tied to changes in data governance and the centre’s wider innovation agenda.

According to DIFC, AI is already being used in areas such as compliance and client services, with further expansion planned across financial workflows, supervisory processes, and institutional decision-making.

DIFC also says the initiative will be supported by a broader ecosystem designed to attract investment, talent, and experimentation. That includes training programmes, venture support, accelerators, and the continued development of its AI-focused innovation infrastructure. The aim is not only to encourage firms to use AI, but to make Dubai a base for building and scaling AI-driven financial services.

The project also extends beyond software and regulation. DIFC says physical infrastructure will evolve alongside digital systems, with plans linked to smart buildings, robotics, autonomous mobility, and digital twins by the end of the decade.

That gives the announcement a broader urban and economic dimension, positioning AI as part of the district’s future design rather than simply a tool used by firms within it.

The broader significance of the move lies in how Dubai is trying to position itself in the global race to shape AI in finance. Rather than focusing only on innovation-friendly rhetoric, DIFC is presenting regulation, infrastructure, skills, and ecosystem-building as part of a single strategy.

If realised in practice, that could strengthen Dubai’s role as a hub for AI-driven financial services and as a testing ground for new governance models.

At the same time, the claim to be the world’s first ‘AI-native’ financial centre should be understood as DIFC’s own description of the project, rather than an independently established category.

The more solid story is that Dubai is trying to make AI part of the operating logic of a financial centre itself, using policy, infrastructure, and investment to support that ambition.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!