OpenAI accelerates enterprise AI growth after Gartner names it an emerging leader

The US tech firm OpenAI gained fresh momentum after being named an Emerging Leader in Generative AI by Gartner. The assessment highlights strong industry confidence in OpenAI’s ability to support companies that want reliable and scalable AI systems.

Enterprise clients have increasingly adopted the company’s tools after significant investment in privacy controls, data governance frameworks and evaluation methods that help organisations deploy AI safely.

More than one million companies now use OpenAI’s technology, driven by workers who request ChatGPT as part of their daily tasks.

Over eight hundred million weekly users arrive already familiar with the tool, which shortens pilot phases and improves returns, rather than slowing transformation with lengthy onboarding. ChatGPT Enterprise has experienced sharp expansion, recording ninefold growth in seats over the past year.

OpenAI views generative AI as a new layer of enterprise infrastructure rather than a peripheral experiment. The next generation of systems is expected to be more collaborative and closely integrated with corporate operations, supporting new ways of working across multiple sectors.

The company aims to help organisations convert AI strategies into measurable results, rather than abstract ambitions.

Executives described the recognition as encouraging, although they stressed that broader progress still lies ahead. OpenAI plans to continue strengthening its enterprise platform, enabling businesses to integrate AI responsibly and at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Report calls for new regulations as AI deepfakes threaten legal evidence

US courtrooms increasingly depend on video evidence, yet researchers warn that the legal system is unprepared for an era in which AI can fabricate convincing scenes.

A new report led by the University of Colorado Boulder argues that national standards are urgently needed to guide how courts assess footage generated or enhanced by emerging technologies.

The authors note that judges and jurors receive little training on evaluating altered clips, despite more than 80 percent of cases involving some form of video.

Concerns have grown as deepfakes become easier to produce. A civil case in California collapsed in September after a judge ruled that a witness video was fabricated, and researchers believe such incidents will rise as tools like Sora 2 allow users to create persuasive simulations in moments.

Experts also warn about the spread of the so-called deepfake defence, where lawyers attempt to cast doubt on genuine recordings instead of accepting what is shown.

AI is also increasingly used to clean up real footage and to match surveillance clips with suspects. Such techniques can improve clarity, yet they also risk deepening inequalities when only some parties can afford to use them.

High-profile errors linked to facial recognition have already led to wrongful arrests, reinforcing the need for more explicit courtroom rules.

The report calls for specialised judicial training, new systems for storing and retrieving video evidence and stronger safeguards that help viewers identify manipulated content without compromising whistleblowers.

Researchers hope the findings prompt legal reforms that place scientific rigour at the centre of how courts treat digital evidence as it shifts further into an AI-driven era.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SAP unveils new models and tools shaping enterprise AI

The German multinational software company SAP used its TechEd event in Berlin to reveal a significant expansion of its Business AI portfolio, signalling a decisive shift toward an AI-native future across its suite.

The company expects to deliver 400 AI use cases by the end of 2025, building on more than 300 already in place.

It also argues that its early use cases already generate substantial returns, offering meaningful value for firms seeking operational gains instead of incremental upgrades.

The firm now places AI-native architecture at the centre of its strategy: SAP HANA Cloud supports richer model grounding through multi-model engines, long-term agentic memory, and automated knowledge graph creation.

SAP aims to integrate these tools with SAP Business Data Cloud and Snowflake through zero-copy data sharing next year.

The introduction of SAP-RPT-1, a new relational foundation model designed for structured enterprise data rather than general language tasks, is presented as a significant step toward improving prediction accuracy across finance, supply chains, and customer analytics.

SAP also seeks to empower developers through a mix of low-code and pro-code tools, allowing companies to design and orchestrate their own Joule Agents.

Agent governance is strengthened through the LeanIX agent hub. At the same time, new interoperability efforts based on the agent-to-agent protocol are expected to enable SAP systems to work more smoothly with models and agents from major partners, including AWS, Google, Microsoft, and ServiceNow.

Improvements in ABAP development, including the introduction of SAP-ABAP-1 and a new Visual Studio Code extension, aim to support developers who prefer modern, AI-enabled workflows over older, siloed environments.

Physical AI also takes a prominent role. SAP demonstrated how Joule Agents already operate inside autonomous robots for tasks linked to logistics, field services, and asset performance.

Plans extend from embodied AI to quantum-ready business algorithms designed to enhance complex decision-making without forcing companies to re-platform.

SAP frames the overall strategy as a means to support Europe’s digital sovereignty, which is strengthened through expanded infrastructure in Germany and cooperation with Deutsche Telekom under the Industrial AI Cloud project.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfakes surge as scammers exploit AI video tools

Experts warn that online video is entering a perilous new phase as AI deepfakes spread. Analysts say the number of deepfakes in circulation climbed from roughly 500,000 in 2023 to around eight million in 2025.

Security researchers say deepfake scams have risen by more than 3,000 percent recently. Studies also indicate humans correctly spot high-quality fakes only around one in four times. People are urged to question surprising clips, verify stories elsewhere and trust their instincts.

Video apps such as Sora 2 create lifelike clips that fraudsters reuse for scams. Sora passed one million downloads and later tightened its rules after racist deepfakes of Martin Luther King Jr. circulated on the platform.

Specialists at Outplayed suggest checking eye blinks, mouth movements and hands for subtle distortions. Inconsistent lighting, unnaturally smooth skin or glitching backgrounds can reveal manipulated or AI-generated video.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI expected to reshape 89% of jobs across the workforce in 2026

AI is set to transform the UK workforce in 2026, with nearly 9 out of 10 senior HR leaders expecting AI to reshape jobs, according to a CNBC survey. The survey highlights a shift towards skill-based, AI-enabled recruitment rather than traditional degree-focused hiring.

Despite the widespread adoption of AI, workforce reductions are expected to stem mainly from general cost-cutting rather than AI-driven efficiency gains. Many HR leaders also noted that while AI has improved efficiency and innovation, it has not yet been fully integrated into every job, resulting in uneven impact across organisations.

The research highlights the potential of AI to boost productivity and innovation, with studies indicating that employees can save an average of 7.5 hours per week by utilising AI tools. HR experts emphasised that learning to use AI to augment human interactions, rather than replace them, will be crucial for the workforce’s future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Vatican gathers global experts on AI and medicine

Medical professionals, ethicists and theologians gathered in the Vatican this week to discuss the ethical use of AI in healthcare. The conference, organised by the Pontifical Academy for Life and the International Federation of Catholic Medical Associations, highlighted the growing role of AI in diagnostics and treatment.

Speakers warned against reducing patient care to data alone, stressing that human interaction and personalised treatment remain central to medicine. Experts highlighted the need for transparency, non-discrimination and ethical oversight when implementing AI, noting that technology should enhance rather than replace human judgement.

The event also explored global experiences from regions including India, Latin America and Europe, with participants emphasising the role of citizens in shaping AI’s direction in medicine. Organisers called for ongoing dialogue between healthcare professionals, faith communities and technology leaders to ensure AI benefits patients while safeguarding human dignity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hyundai launches record investment to boost South Korea’s tech future

Hyundai Motor Group has unveiled a record 85.8 billion dollar investment plan that will reshape South Korea’s industrial landscape over the next five years.

The company intends to channel a large share of the funds into fields such as AI, robotics, electrification, software-defined vehicles, and hydrogen technologies.

Hyundai presents the roadmap as evidence of an agile response to a global environment in which export strength and technological leadership matter more than ever.

A major part of the strategy centres on turning innovation into export gains. The group expects the investment to raise overseas shipments of South Korea-made vehicles by more than thirteen percent by 2030.

The plan comes shortly after Seoul concluded a new trade agreement with Washington that lowers tariffs on South Korean vehicles to fifteen percent from the previous twenty-five percent. The rate nonetheless remains much higher than the 2.5 percent applied before the renegotiation.

Hyundai’s announcement mirrors a wider industrial push across the country. Samsung Group recently committed 310 billion dollars for a similar period, largely focused on AI development.

Both companies aim to reinforce the nation’s position in advanced technologies and secure long-term competitiveness at a time when global supply chains and industrial alliances are rapidly shifting.

Hyundai, together with Kia, sold more than 7.2 million vehicles globally last year.

The company views its new investment programme as a foundation for future export growth and a signal that South Korea plans to anchor its economic future in next-generation technologies instead of relying on past models of industrial expansion.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google commits 40 billion dollars to expand Texas AI infrastructure

Google will pour 40 billion dollars into Texas by 2027 to expand its digital infrastructure. The funding focuses on new cloud and AI facilities alongside existing campuses in Midlothian and Dallas.

Three new US data centres are planned: one in Armstrong County and two in Haskell County. One Haskell site will sit beside a solar plant and battery storage facility. The investment is accompanied by agreements for more than 6,200 megawatts of additional power generation.

Google will create a 30 million dollar Energy Impact Fund supporting Texan energy efficiency and affordability projects. The company backs training for existing electricians and over 1,700 apprentices through electrical training programmes.

The spending strengthens Texas’s position as a major hub for data centres and AI development. Google says the expanded infrastructure and workforce will help maintain US leadership in advanced computing technologies. The company highlights its 15-year presence in Texas and pledges ongoing community support.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New blueprint ensures fair AI in democratic processes

A rights-centred AI blueprint highlights the growing use of AI in analysing citizen submissions during public participation, promising efficiency but raising questions about fairness, transparency and human rights. Experts caution that poorly designed AI could silence minority voices, deepen inequalities and weaken trust in democratic decision-making.

The European Centre for Not-for-Profit Law (ECNL) provides detailed guidance for governments, civil society organisations and technology developers on how to implement AI responsibly. Recommendations include conducting human rights impact assessments, involving marginalised communities from the design stage, testing AI accuracy across demographics, and ensuring meaningful human oversight at every stage.

Transparency and accountability are key pillars of the framework, providing guidance on publishing assessments, documenting AI decision-making processes, and mitigating bias. Experts stress that efficiency gains should never come at the expense of inclusiveness, and that AI tools must be monitored and updated continually to reflect community feedback and rights considerations.

The blueprint also emphasises collaboration and sustainability, urging multistakeholder governance, civil society co-design, and ongoing training for public servants and developers. By prioritising rights, transparency and community engagement, AI in public participation can enhance citizen voices rather than undermining them, but only if implemented deliberately and inclusively.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI supports doctors in spotting broken bones

Hospitals in Lincolnshire, UK, are introducing AI to assist doctors in identifying fractures and dislocations, with the aim of speeding up treatment and improving patient care. The Northern Lincolnshire and Goole NHS Foundation Trust will launch a two-year NHS England pilot later this month.

AI software will provide near-instant annotated X-rays alongside standard scans, highlighting potential issues for clinicians to review. Patients under the age of two, as well as those undergoing chest, spine, skull, facial or soft tissue imaging, will not be included in the pilot.

Consultants emphasise that AI is an additional tool, not a replacement, and clinicians will retain the final say on diagnosis and treatment. Early trials in northern Europe suggest the technology can help meet rising demand, and the trust is monitoring its impact closely.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!