US tech firm OpenAI gained fresh momentum after being named an Emerging Leader in Generative AI by Gartner. The assessment highlights strong industry confidence in OpenAI’s ability to support companies that want reliable and scalable AI systems.
Enterprise clients have increasingly adopted the company’s tools following its significant investment in privacy controls, data governance frameworks and evaluation methods that help organisations deploy AI safely.
More than one million companies now use OpenAI’s technology, driven by workers who request ChatGPT as part of their daily tasks.
Over eight hundred million weekly users arrive already familiar with the tool, shortening pilot phases and improving returns rather than slowing transformation with lengthy onboarding. ChatGPT Enterprise has also expanded sharply, recording ninefold growth in seats over the past year.
OpenAI views generative AI as a new layer of enterprise infrastructure rather than a peripheral experiment. The next generation of systems is expected to be more collaborative and closely integrated with corporate operations, supporting new ways of working across multiple sectors.
The company aims to help organisations convert AI strategies into measurable results, rather than abstract ambitions.
Executives described the recognition as encouraging, although they stressed that broader progress still lies ahead. OpenAI plans to continue strengthening its enterprise platform, enabling businesses to integrate AI responsibly and at scale.
US courtrooms increasingly depend on video evidence, yet researchers warn that the legal system is unprepared for an era in which AI can fabricate convincing scenes.
A new report led by the University of Colorado Boulder argues that national standards are urgently needed to guide how courts assess footage generated or enhanced by emerging technologies.
The authors note that judges and jurors receive little training on evaluating altered clips, despite more than 80 percent of cases involving some form of video.
Concerns have grown as deepfakes become easier to produce. A civil case in California collapsed in September after a judge ruled that a witness video was fabricated, and researchers believe such incidents will rise as tools like Sora 2 allow users to create persuasive simulations in moments.
Experts also warn about the spread of the so-called deepfake defence, where lawyers attempt to cast doubt on genuine recordings instead of accepting what is shown.
AI is also increasingly used to clean up real footage and to match surveillance clips with suspects. Such techniques can improve clarity, yet they also risk deepening inequalities when only some parties can afford to use them.
High-profile errors linked to facial recognition have already led to wrongful arrests, reinforcing the need for more explicit courtroom rules.
The report calls for specialised judicial training, new systems for storing and retrieving video evidence and stronger safeguards that help viewers identify manipulated content without compromising whistleblowers.
Researchers hope the findings prompt legal reforms that place scientific rigour at the centre of how courts treat digital evidence as it shifts further into an AI-driven era.
Experts warn that online video is entering a perilous new phase as AI deepfakes spread. Analysts say the number of deepfake videos online climbed from roughly 500,000 in 2023 to eight million in 2025.
Security researchers say deepfake scams have risen by more than 3,000 percent in recent years. Studies also indicate that humans correctly spot high-quality fakes only about one in four times. People are urged to question surprising clips, verify stories elsewhere and trust their instincts.
Specialists at Outplayed suggest checking eye blinks, mouth movements and hands for subtle distortions. Inconsistent lighting, unnaturally smooth skin or glitching backgrounds can reveal manipulated or AI-generated video.
Medical professionals, ethicists and theologians gathered in the Vatican this week to discuss the ethical use of AI in healthcare. The conference, organised by the Pontifical Academy for Life and the International Federation of Catholic Medical Associations, highlighted the growing role of AI in diagnostics and treatment.
Speakers warned against reducing patient care to data alone, stressing that human interaction and personalised treatment remain central to medicine. Experts highlighted the need for transparency, non-discrimination and ethical oversight when implementing AI, noting that technology should enhance rather than replace human judgement.
The event also explored global experiences from regions including India, Latin America and Europe, with participants emphasising the role of citizens in shaping AI’s direction in medicine. Organisers called for ongoing dialogue between healthcare professionals, faith communities and technology leaders to ensure AI benefits patients while safeguarding human dignity.
A rights-centred AI blueprint highlights the growing use of AI in analysing citizen submissions during public participation, promising efficiency but raising questions about fairness, transparency and human rights. Experts caution that poorly designed AI could silence minority voices, deepen inequalities and weaken trust in democratic decision-making.
The European Centre for Not-for-Profit Law (ECNL) provides detailed guidance for governments, civil society organisations and technology developers on how to implement AI responsibly. Recommendations include conducting human rights impact assessments, involving marginalised communities from the design stage, testing AI accuracy across demographics, and ensuring meaningful human oversight at every stage.
Transparency and accountability are key pillars of the framework, which provides guidance on publishing assessments, documenting AI decision-making processes, and mitigating bias. Experts stress that efficiency gains should never come at the expense of inclusiveness, and that AI tools must be continually monitored and updated to reflect community feedback and rights considerations.
The blueprint also emphasises collaboration and sustainability, urging multistakeholder governance, civil society co-design, and ongoing training for public servants and developers. By prioritising rights, transparency and community engagement, AI in public participation can enhance citizen voices rather than undermine them, but only if implemented deliberately and inclusively.
Eurofiber France has suffered a data breach affecting its internal ticket management system and ATE customer portal, reportedly discovered on 13 November. The incident allegedly involved unauthorised access via a software vulnerability, with the full extent still unclear.
Sources indicate that approximately 3,600 customers could be affected, including major French companies and public institutions. Reports suggest that some of the allegedly stolen data, ranging from documents to cloud configurations, may have appeared on the dark web for sale.
Eurofiber has emphasised that Dutch operations are not affected.
The company moved quickly to secure affected systems, increasing monitoring and collaborating with cybersecurity specialists to investigate the incident. The French privacy regulator, CNIL, has been informed, and Eurofiber states that it will continue to update customers as the investigation progresses.
Founded in 2000, Eurofiber provides fibre optic infrastructure across the Netherlands, Belgium, France, and Germany. Primarily owned by Antin Infrastructure Partners and partially by Dutch pension fund PGGM, the company remains operational while assessing the impact of the breach.
Disney faces intense criticism after CEO Bob Iger announced plans to allow AI-generated content on Disney+. The streaming service, known for its iconic hand-drawn animation, now risks alienating artists and fans who value traditional craftsmanship.
Iger said AI would offer Disney+ users more interactive experiences, including the creation and sharing of short-form content. The company plans to expand gaming on Disney+ by continuing its collaborations with Fortnite, as well as featuring characters from Star Wars and The Simpsons.
Artists and animators reacted sharply, warning that AI could lead to job losses and a flood of low-quality material. Social media users called for a boycott, emphasising that generative AI undermines the legacy of Disney’s animation and may drive subscribers away.
The backlash reflects broader industry concerns, as other studios, such as Illumination and DreamWorks, have also rejected the use of generative AI. Creators like Dana Terrace of The Owl House urged fans to support human artistry, backing the push to defend traditional animation from AI-generated content.
In the UK and other countries, teenagers continue to encounter harmful social media content, including posts about bullying, suicide and weapons, despite the Online Safety Act coming into effect in July.
A BBC investigation using test profiles revealed that some platforms continue to expose young users to concerning material, particularly on TikTok and YouTube.
The experiment, conducted with six fictional accounts aged 13 to 15, revealed differences in exposure between boys and girls.
While Instagram showed marked improvement, with no harmful content displayed during the latest test, TikTok users were repeatedly served posts about self-harm and abuse, and one YouTube profile encountered videos featuring weapons and animal harm.
Experts warned that changes will take time and urged parents to actively monitor their children’s online activity. They also recommended open conversations about content, the use of parental controls, and vigilance rather than relying solely on the new regulatory codes.
eBay is deepening its investment in AI as part of a multi-year effort to revive the platform after years of stagnant growth.
The company, which saw renewed momentum during the pandemic, has launched five new AI features this year, including AI-generated shipping estimates and an AI shopping agent, alongside a partnership with OpenAI.
Chief executive Jamie Iannone argues that eBay’s long history gives it an advantage in the AI era, citing decades of product listings, buyer behaviour data and more than two billion active listings. That data underpins tools such as the ‘magical listing’ feature, which automatically produces item descriptions from photos, and an AI assistant that answers buyer questions based on a listing’s details.
These tools are also aimed at unlocking supply: eBay says the average US household holds thousands of dollars’ worth of unused goods.
Analysts note that helping casual sellers overcome the friction of listing and photographing items could lift the company’s gross merchandise volume, which grew 10 percent in the most recent quarter.
AI is also reshaping the buyer experience. Around 70 percent of eBay transactions come from enthusiasts who already know how to navigate the platform. The new ‘eBay.ai’ tool is designed to help less experienced users by recommending products based on natural-language descriptions.
Despite this push, the platform still faces intense competition from Amazon, Google, Shein and emerging AI-shopping agents. Iannone has hinted that eBay may integrate with external systems such as OpenAI’s instant-checkout tools to broaden discovery beyond the platform.
OpenAI has introduced a new group chat feature in its ChatGPT app, currently piloted across Japan, New Zealand, South Korea and Taiwan. The rollout aims to test how users will interact in multi-participant conversations with the AI.
The pilot enables Free, Plus, and Team users on both mobile and web platforms to start or join group chats of up to 20 participants, where ChatGPT can participate as a member.
Human-to-human messages do not count against AI usage quotas; usage only applies when the AI replies. Group creators remain in charge of membership; invite links are used for access, and additional safeguards are applied when participants under the age of 18 are present.
This development marks a significant pivot from one-on-one AI assistants toward collaborative workflows, messaging and shared decision-making.
From a digital policy and governance perspective, this new feature raises questions around privacy, data handling in group settings, the role of AI in multi-user contexts and how usage quotas or model performance might differ across plans.