OpenAI plans to integrate Sora video generation into ChatGPT

According to reports, OpenAI is preparing to integrate its AI video generator Sora directly into ChatGPT, a move that could expand the platform’s capabilities beyond text and image generation.

Sora currently operates as a standalone application and web service. Integrating the tool into ChatGPT could dramatically increase its visibility and usage, particularly given the chatbot’s massive global user base.

OpenAI released an updated version of the model in 2025 that lets users create, remix and even appear inside AI-generated videos. Bringing those features into ChatGPT would represent a major step toward making video generation a mainstream function within conversational AI systems.

Competition in the generative video market is intensifying. Google is developing similar technology, with its Gemini platform offering video creation powered by the Veo model, and other developers are launching text-to-video models as the field rapidly expands.

Despite the potential growth, integrating video generation into ChatGPT may significantly increase operating costs. Running large AI systems requires vast computing resources and energy, and the chatbot already costs billions of dollars annually to operate.

Although OpenAI earns revenue from subscriptions, the majority of ChatGPT users currently use the free version. The company is therefore exploring additional monetisation strategies, including advertising and new premium services.

Integrating Sora into ChatGPT could therefore serve both strategic and financial goals, strengthening the platform’s position in the competitive generative AI market while expanding the types of content users can create.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI tools encourage exploration in creative tasks

AI is often associated with automation and job replacement, yet new research from Swansea University suggests a different role. Findings indicate that AI can serve as a creative collaborator, encouraging exploration and deeper engagement during design tasks.

Researchers from the university’s Computer Science Department ran an experiment with over 800 participants using an AI-supported system to design virtual cars.

Rather than optimising for a single best result, the system generated galleries of varied design ideas, including effective, unusual and intentionally flawed concepts.
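
The selection mechanism can be sketched in a few lines: instead of returning only the highest-scoring designs, the tool surfaces a deliberately mixed gallery. Below is a minimal Python illustration, assuming an arbitrary list of candidate designs and a scoring function; the Swansea system's actual selection logic is not detailed here and may differ.

```python
import random

def generate_gallery(candidates, score, size=9, seed=0):
    """Select an inspiration gallery mixing strong, unusual and flawed designs.

    Illustrative sketch only. `candidates` is any list of designs and
    `score` is a quality function over them.
    """
    rng = random.Random(seed)
    ranked = sorted(candidates, key=score, reverse=True)
    third = size // 3
    middle = ranked[third:len(ranked) - third]
    gallery = ranked[:third]                                # effective concepts
    gallery += rng.sample(middle, min(third, len(middle)))  # unusual mid-field picks
    gallery += ranked[-third:]                              # intentionally flawed concepts
    return gallery
```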

According to lead researcher Sean Walton, exposure to AI-generated suggestions increased participants’ involvement. Many users spent longer working on the task and produced stronger designs after interacting with the system’s diverse proposals.

The study, published in ACM Transactions on Interactive Intelligent Systems, argues that traditional methods for evaluating AI tools are too narrow. Researchers believe broader assessments are needed to measure how AI affects human thinking, emotions, and creative exploration.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI technology set to reshape farming and rural life in South Korea

South Korea has launched a national agenda to expand AI across agriculture, aiming to boost productivity and improve living standards in rural communities. Officials from the Ministry of Agriculture, Food and Rural Affairs and the Ministry of Science and ICT presented the strategy as part of a wider digital transformation effort.

Plans include expanding smart farm models that reduce labour-intensive tasks and allow more farmers to benefit from automated technologies. Shared machinery centres and autonomous farming tools such as drones will be developed with support from the Rural Development Administration.

Authorities also intend to apply AI to agricultural distribution through smart logistics facilities that manage receiving, sorting and shipping processes. Around 300 smart Agricultural Products Processing Centres are expected to operate nationwide by 2030.

Livestock grading systems using AI will be introduced to improve accuracy and consumer trust across pork and beef processing facilities. Officials aim to raise the share of AI-graded meat from 19.4 percent in 2025 to 70 percent by 2030.

Beyond production, the programme seeks to expand ‘smart rural communities’ offering AI-based services such as transport, daily living support and farming assistance. Policymakers believe that a stronger digital infrastructure will help rural regions respond to climate pressures and an ageing population.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Writer files lawsuit against Grammarly over AI feature using experts’ identities

A journalist has filed a class action lawsuit against Grammarly after the company introduced an AI feature that generated editorial feedback by imitating well-known writers and public figures without their permission.

The legal complaint was submitted by investigative journalist Julia Angwin, who argued that the tool unlawfully used the identities and reputations of authors and commentators.

The feature, known as ‘Expert Review’, produced automated critiques presented as if they came from figures such as Stephen King, Carl Sagan and technology journalist Kara Swisher.

The feature was available to subscribers paying an annual fee and was designed to simulate professional editorial guidance.

Critics quickly questioned both the quality of the generated feedback and the decision to associate the tool with real individuals who had not authorised the use of their names or expertise.

Technology writer Casey Newton tested the system by submitting one of his own articles and received automated feedback attributed to an AI version of Swisher. The response appeared generic, casting doubt on the value of linking such commentary to prominent personalities.

Following criticism from writers and researchers, the feature was disabled. Shishir Mehrotra, chief executive of Grammarly’s parent company Superhuman, issued a public apology while defending the broader concept behind the tool.

The lawsuit reflects growing tensions around AI systems that replicate creative styles or professional expertise.

As generative AI technologies expand across writing and publishing industries, questions surrounding consent, intellectual labour and identity rights are becoming increasingly prominent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU lawmakers move forward on AI Act changes

Members of the European Parliament have reached a preliminary political agreement on amendments to the EU Artificial Intelligence Act. The compromise will be reviewed by parliamentary committees before a scheduled vote in Brussels.

Lawmakers in the EU agreed to extend compliance deadlines for some high-risk AI systems. The changes aim to give companies and regulators more time to prepare technical standards and enforcement frameworks.

The proposed amendments also include a ban on AI systems that create non-consensual explicit deepfakes. EU officials say the measure aims to strengthen consumer protection and improve online safety for children.

Industry groups have raised concerns about the compliance burdens linked to the revised rules, while EU policymakers continue negotiations as the legislation moves toward committee approval.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New venture aims to build AI that understands the real world

AI pioneer Yann LeCun has secured more than $1 billion in funding for a new startup that aims to rethink how AI systems learn about the world.

The venture, called Advanced Machine Intelligence (AMI), will focus on developing AI that learns from real-world signals, such as camera and sensor data, rather than relying primarily on text. According to the French company, such systems could make better decisions by understanding how events unfold in the physical world.

AMI plans to build what researchers call ‘world models’, AI systems designed to predict the consequences of actions before they happen. Developers believe that grounding AI in real-world data could make the technology more reliable and easier to control, especially in critical safety applications.
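
AMI has not published technical details, but the underlying idea is straightforward to sketch: a world model is a learned transition function that maps a state and an action to a predicted next state, and a planner uses it to simulate candidate action sequences before committing to one. The toy Python below substitutes a fixed linear map for the learned dynamics; every name in it is illustrative.

```python
import numpy as np

class WorldModel:
    """Toy world model: predicts the next state given a state and an action.

    Illustrative only; a real system would learn the dynamics from
    camera and sensor data rather than use a fixed linear map.
    """

    def __init__(self, state_dim: int, action_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.A = rng.normal(scale=0.1, size=(state_dim, state_dim))
        self.B = rng.normal(scale=0.1, size=(state_dim, action_dim))

    def predict(self, state: np.ndarray, action: np.ndarray) -> np.ndarray:
        # Predicted next state; a learned network would replace this map.
        return state + self.A @ state + self.B @ action

def plan(model: WorldModel, state: np.ndarray, candidate_plans, goal: np.ndarray):
    """Pick the action sequence whose simulated outcome lands closest to the goal."""
    def rollout(actions):
        s = state
        for a in actions:
            s = model.predict(s, a)  # imagine consequences before acting
        return s
    return min(candidate_plans, key=lambda seq: np.linalg.norm(rollout(seq) - goal))
```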

Operations will span several global research hubs, including Paris, New York City, Montreal and Singapore. The company has already begun assembling its leadership team, appointing entrepreneur Alex LeBrun as chief executive and AI researcher Saining Xie as chief science officer.

Support for the project quickly appeared online. French President Emmanuel Macron welcomed the launch, saying it represented a new chapter in AI and highlighting the role of researchers and innovators in shaping the technology’s future.

LeCun is widely regarded as one of the key figures behind modern AI. He shared the prestigious 2018 Turing Award with fellow researchers Geoffrey Hinton and Yoshua Bengio for their contributions to deep learning.

Research at AMI will focus on building AI systems that can reason, plan actions and maintain long-term memory. Possible applications range from robotics and industrial automation to healthcare and wearable technologies, areas where dependable AI could have a major impact.

LeCun and his team argue that genuine intelligence cannot emerge from language alone. Understanding the world, they say, requires machines that learn directly from it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK watchdog demands stronger child safety on social platforms

The British communications regulator Ofcom has called on major technology companies to enforce stricter age controls and improve safety protections for children using online platforms.

The warning targets services widely used by young audiences, including Facebook, Instagram, Roblox, Snapchat, TikTok and YouTube.

Regulators said that despite existing minimum age policies, large numbers of children under the age of 13 continue to access platforms intended for older users.

According to Ofcom research, more than 70 percent of children aged 8 to 12 regularly use such services.

Authorities have asked companies to demonstrate how they will strengthen protections and ensure compliance with minimum age requirements.

Platforms must present their plans by 30 April, after which Ofcom will publish an assessment of their responses and determine whether further regulatory action is necessary.

The regulator also outlined several key areas requiring improvement.

Companies in the UK are expected to implement more effective age-verification systems, strengthen protections against online grooming and ensure that recommendation algorithms do not expose children to harmful content.

Another concern involves product development practices.

Ofcom warned that new digital features, including AI tools, should not be tested on children without adequate safety assessments. Platforms are required to evaluate potential risks before launching significant updates.

The measures are part of the UK’s broader regulatory framework introduced under the Online Safety Act, which aims to reduce exposure to harmful online material.

The law requires platforms to prevent children from accessing content linked to pornography, suicide, self-harm and eating disorders, while limiting the promotion of violent or abusive material in recommendation feeds.

Ofcom indicated that enforcement action may follow if companies fail to demonstrate meaningful improvements. Regulators argue that stronger safeguards are necessary to restore public trust and ensure that digital platforms prioritise child safety in their design and operation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU competition regulators expand scrutiny across the entire AI ecosystem

Competition authorities in the EU are broadening their oversight of the AI sector, examining every layer of the technology’s value chain.

Speaking at a conference in Berlin, EU competition chief Teresa Ribera explained that regulators are analysing the full ‘AI stack’ rather than focusing solely on consumer applications.

According to the competition chief, scrutiny extends beyond visible AI tools to the systems that support them. Investigations are assessing underlying models, the data used to train those models, as well as cloud infrastructure and energy resources that power AI systems.

Regulatory attention has already reached the application layer.

The European Commission opened an investigation in 2025 involving Meta after concerns emerged that the company could restrict competing AI assistants on its messaging platform WhatsApp.

Following regulatory pressure, Meta proposed allowing rival AI chatbots on the platform in exchange for a fee. European regulators are now assessing the proposal to determine whether additional intervention is necessary to preserve fair competition in rapidly evolving digital markets.

Authorities have also examined concentration risks across other parts of the AI ecosystem, including the infrastructure layer dominated by companies such as Nvidia.

Regulators argue that effective competition oversight must address the entire technology stack as AI markets expand quickly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI is helping close the heart health gap in remote Australian communities

Google has launched a new AI-powered initiative aimed at reducing heart disease risk in rural Australia, where people living in remote communities are 60% more likely to die from heart disease than those in metropolitan areas.

The programme, a first for the Asia-Pacific region, is backed by an AU$1 million investment from Google Australia’s Digital Future Initiative and brings together Wesfarmers Health, SISU Health, the Victor Chang Cardiac Research Institute, and Latrobe Health Services.

At the centre of the initiative is Google for Health’s Population Health AI (PHAI), an advanced analytics tool that analyses aggregated and de-identified datasets, including clinical records, air quality, pollen levels, and geographic data, to identify hidden health risks at a community level.

The aim is to help health organisations move away from reactive treatment towards proactively managing chronic condition risks tailored to specific towns or postcodes.
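
Google has not disclosed how PHAI works internally, but the community-level approach it describes can be illustrated with a toy example: de-identified screening records are aggregated by postcode, joined with environmental data such as air quality, and ranked by a composite risk score. Every column, weight and postcode below is hypothetical.

```python
import pandas as pd

# Hypothetical de-identified inputs; PHAI's real features and weights are not public.
screenings = pd.DataFrame({
    "postcode": ["3840", "3840", "3875", "3875"],
    "systolic_bp": [151, 139, 128, 132],
    "smoker": [1, 0, 0, 1],
})
environment = pd.DataFrame({
    "postcode": ["3840", "3875"],
    "pm25_annual": [9.1, 6.4],  # air quality, one signal the programme cites
})

# Aggregate individual records to community level before any analysis.
community = screenings.groupby("postcode").mean().join(
    environment.set_index("postcode")
)

# Toy composite risk score used to rank postcodes for extra screening outreach.
community["risk"] = (
    (community["systolic_bp"] - 120) / 20
    + community["smoker"]
    + community["pm25_annual"] / 10
)
print(community.sort_values("risk", ascending=False))
```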

SISU Health will use PHAI insights to guide the delivery of over 50,000 new health screenings across remote areas, combining geographic AI analysis with on-the-ground community care. Google described the goal as ensuring every Australian has access to personalised care regardless of where they live.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI in precision oncology faces a trust and safety challenge

A narrative review published in the Journal of Hematology & Oncology examined how generative AI tools could support oncologists in precision cancer care.

In this increasingly data-intensive field, clinicians must cross-reference genomic sequencing results, patient records, imaging findings, and a rapidly expanding body of biomedical literature to inform their decisions.

Researchers found promising results for AI-assisted clinical trial matching and diagnostic report drafting, but also highlighted significant risks that make unsupervised deployment dangerous.

On the positive side, the AI tool TrialGPT demonstrated 87.3% agreement with expert assessments when matching patients to clinical trials, while reducing processing time by an average of 42.6%.

Meanwhile, the vision-language model Flamingo-CXR matched or exceeded the performance of board-certified radiologists in 94% of chest X-ray cases with no clinically relevant findings.

Researchers cautioned, however, that clinically significant errors appeared in 24.8% of evaluated imaging reports, whether AI- or human-generated, underscoring the need for combined oversight.

The review’s authors advocate for ‘Human-in-the-Loop’ workflows, in which human experts review all AI outputs before clinical implementation, and for Retrieval-Augmented Generation techniques that force AI systems to draw on current medical guidelines rather than relying solely on their base training data.
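
As an illustration of how retrieval-augmented generation constrains a model in this setting, the sketch below retrieves relevant guideline passages, builds a prompt around them, and gates the output behind a human reviewer. The keyword retrieval stands in for a real vector index, and all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Guideline:
    source: str
    text: str

def retrieve(query: str, corpus: list[Guideline], k: int = 3) -> list[Guideline]:
    """Naive keyword-overlap retrieval standing in for a real vector search index."""
    terms = set(query.lower().split())
    scored = sorted(corpus, key=lambda g: -len(terms & set(g.text.lower().split())))
    return scored[:k]

def build_prompt(question: str, passages: list[Guideline]) -> str:
    """Ground the model in retrieved guideline text rather than its training data alone."""
    context = "\n".join(f"[{g.source}] {g.text}" for g in passages)
    return (
        "Answer using ONLY the guideline excerpts below and cite their sources.\n"
        f"Guidelines:\n{context}\n\nQuestion: {question}"
    )

def human_in_the_loop(draft: str, approve) -> str | None:
    """No AI output reaches the clinic without sign-off by a human expert."""
    return draft if approve(draft) else None
```

Because the prompt instructs the model to answer only from the retrieved excerpts and cite them, a reviewing clinician can check each claim against the named guideline instead of trusting the model’s training data.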

The key conclusion is that AI should function as an assistant to oncologists, not as an autonomous decision maker.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!