Amazon’s Nova Reel can now generate two-minute AI videos

Amazon has updated its generative AI video tool, Nova Reel, enabling it to create videos up to two minutes long.

The updated model, Nova Reel 1.1, supports multi-shot video generation with a consistent style and accepts detailed prompts of up to 4,000 characters.

A new feature called Multishot Manual gives users more creative control, combining images and short prompts to guide video composition. This mode generates up to 20 shots from a single 1280 x 720 image and a prompt of up to 512 characters, offering finer-tuned output.

Nova Reel is currently accessible through Amazon Web Services (AWS) via its Bedrock AI development platform; developers must request access, though approval is granted automatically.

The model enters a competitive field dominated by OpenAI, Google, and others racing to lead in generative video AI.

Despite its growing capabilities, Amazon has not disclosed how the model was trained or the sources of its training data. Questions around intellectual property remain, but Amazon says it will shield customers from copyright claims through its indemnification policy.

For more information on these topics, visit diplomacy.edu.

DeepMind blocks staff from joining AI rivals

Google DeepMind is enforcing strict non-compete agreements in the United Kingdom, preventing employees from joining rival AI companies for up to a year. The length of the restriction depends on an employee’s seniority and involvement in key projects.

Some DeepMind staff, including those working on Google’s Gemini AI, are reportedly being paid not to work while their non-competes run. The policy comes as competition for AI talent intensifies worldwide.

Employees have voiced concern that these agreements could stall their careers in a rapidly evolving industry. Some are seeking ways around the restrictions, such as moving to countries with less rigid employment laws.

While DeepMind claims the contracts are standard for sensitive work, critics say they may stifle innovation and mobility. The practice remains legal in the UK, even though US regulators have moved to ban similar agreements.

IBM unveils AI-powered mainframe z17

IBM has announced the launch of its most advanced mainframe yet, the z17, powered by the new Telum II processor. Designed to handle more AI operations, the system delivers up to 50% more daily inference tasks than its predecessor.

The z17 features a second-generation on-chip AI accelerator and introduces new tools for managing and securing enterprise data. A Spyre Accelerator add-on, expected later this year, will enable generative AI features such as large language models.

More than 100 clients contributed to the development of the z17, which also supports a forthcoming operating system, z/OS 3.2. The OS update is set to enable hybrid cloud data processing and enhanced NoSQL support.

IBM says the z17 brings AI to the core of enterprise infrastructure, enabling organisations to tap into large data sets securely and efficiently, with strong performance across both traditional and AI workloads.

Bank appoints head of AI enablement

Standard Chartered has appointed David Hardoon as its global head of AI enablement, further embedding AI across its operations. Based in Singapore, he will report to group chief data officer Mohammed Rahim.

Hardoon will lead AI governance and identify areas where AI can enhance productivity, efficiency, and client experiences. His appointment follows the bank’s recent rollout of a generative AI tool to over 70,000 employees across 41 markets.

The bank has been steadily introducing AI-driven tools, including a smart video column to provide insights for clients in Asia. It plans further expansion of its internal AI systems across additional regions.

With more than 20 years of experience in data and AI, including with Singapore’s central bank, Hardoon is expected to guide the responsible and strategic use of AI technologies across Standard Chartered’s global footprint.

FBI and INTERPOL investigate Oracle Health data breach

Oracle Health has reportedly suffered a data breach that compromised sensitive patient information stored by American hospitals.

The cyberattack, discovered in February 2025, involved threat actors using stolen customer credentials to access an old Cerner server that had not yet been migrated to Oracle Cloud. Oracle acquired the healthcare tech company Cerner in 2022 for $28.3 billion.

In notifications sent to affected customers, Oracle acknowledged that data had been downloaded by unauthorised users. The FBI is said to be investigating the incident and exploring whether ransom demands are involved. Oracle has yet to publicly comment on the breach.

The news comes amid growing cybersecurity concerns. A recent report from Horizon3.ai revealed that over half of IT professionals delay critical software patches, leaving organisations vulnerable. Meanwhile, OpenAI has boosted its bug bounty rewards to encourage more proactive security research.

In a broader crackdown on cybercrime, INTERPOL recently arrested over 300 suspects across seven African countries for online scams affecting more than 5,000 victims, seizing devices, properties, and other assets.

DeepSeek teases next big AI model

Chinese AI startup DeepSeek has introduced a new reasoning method aimed at enhancing the performance of large language models (LLMs).

The approach, developed in partnership with researchers from Tsinghua University, combines generative reward modelling (GRM) with self-principled critique tuning to improve the speed and quality of LLM outputs.

According to a recently published paper, the resulting DeepSeek-GRM models achieved competitive results, even outperforming public reward models in some instances. Although DeepSeek has expressed plans to open-source the GRM models, no release date has been confirmed.

The announcement comes amid growing speculation about DeepSeek’s next major release. The company gained global recognition earlier this year with its R1 reasoning model, which outperformed some older models like the original ChatGPT.

The R1’s success was notable not just for its performance but also for being open source and developed on a relatively modest budget. Industry observers believe DeepSeek is preparing to unveil the R2 model soon, possibly by the end of the month, though the company has declined to comment officially.

Founded in Hangzhou in 2023 by entrepreneur Liang Wenfeng, DeepSeek has prioritised research over public relations, quietly building momentum in the AI sector.

The company recently showcased DeepSeek-V3-0324, an upgraded model with improved reasoning, better web development capabilities and enhanced Chinese writing. DeepSeek has also made parts of its codebase available to the public, signalling a commitment to open development.

Backed by High-Flyer Quant, Liang’s hedge fund, the startup is emerging as a serious contender in the global AI race, drawing praise from the president of China, Xi Jinping, for its innovation and strategic significance.

Dutch researchers to face new security screenings

The Dutch government has proposed new legislation requiring background checks for thousands of researchers working with sensitive technologies. The plan, announced by Education Minister Eppo Bruins, aims to block foreign intelligence from accessing high-risk scientific work.

Around 8,000 people a year, including Dutch citizens, would undergo screening covering criminal records, work history, and possible links to hostile regimes.

Intelligence services would support the process, which targets sectors like AI, quantum computing, and biotech.

Universities worry the checks may deter global talent due to delays and bureaucracy. Critics also highlight a loophole: screenings occur only once, meaning researchers could still be approached by foreign governments after being cleared.

While other countries are introducing similar measures, the Netherlands will attempt to avoid unnecessary delays. Officials admit, however, that no system can eliminate all risks.

Man uses AI avatar in New York court

A 74-year-old man representing himself in a New York State appeal has apologised after using an AI-generated avatar during court proceedings.

Jerome Dewald submitted a video featuring a youthful digital figure to deliver part of his legal argument, prompting confusion and criticism from the judges. One justice described the move as misleading, expressing frustration over the lack of prior disclosure.

Dewald later explained he intended to ease his courtroom anxiety and present his case more clearly, not to deceive.

In a letter to the judges, he acknowledged that transparency should have taken priority and accepted responsibility for the confusion caused. His case, a contract dispute with a former employer, remains under review by the appellate court.

The incident has reignited debate over the role of AI in legal settings. Recent years have seen several high-profile cases where AI-generated content introduced errors or false information, highlighting the risks of using generative technology without proper oversight.

Legal experts say such incidents are becoming increasingly common as AI tools become more accessible.

OpenAI’s Sam Altman responds to Miyazaki’s AI animation concerns

The recent viral trend of AI-generated Ghibli-style images has taken the internet by storm. Using OpenAI’s GPT-4o image generator, users have been transforming photos, from historic moments to everyday scenes, into Studio Ghibli-style renditions.

The trend has caught the attention of notable figures, including celebrities and political personalities, sparking both excitement and controversy.

While some praise the trend for democratising art, others argue that it infringes on copyright and undermines the efforts of traditional artists. The debate intensified when Hayao Miyazaki, the co-founder of Studio Ghibli, became a focal point.

In a 2016 documentary, Miyazaki expressed his disdain for AI in animation, calling it ‘an insult to life itself’ and warning that humanity is losing faith in its creativity.

OpenAI’s CEO, Sam Altman, recently addressed these concerns, acknowledging the challenges posed by AI in art but defending its role in broadening access to creative tools. Altman believes that technology empowers more people to contribute, benefiting society as a whole, even if it complicates the art world.

Miyazaki’s comments and Altman’s response highlight a growing divide in the conversation about AI and creativity. As the debate continues, the future of AI in art remains a contentious issue, balancing innovation with respect for traditional artistic practices.

Google blends AI mode with Lens

Google is enhancing its experimental AI Mode by combining the visual power of Google Lens with the conversational intelligence of Gemini, offering users a more dynamic way to search.

Instead of typing queries alone, users can now upload photos or take snapshots with their smartphone to receive more insightful answers.

The new feature moves beyond traditional reverse image search. For instance, you could snap a photo of a mystery kitchen tool and ask, ‘What is this, and how do I use it?’, receiving not only a helpful explanation but also links to buy it and even video demonstrations.

Rather than focusing on a single object, AI Mode can interpret entire scenes, offering context-aware suggestions.

Take a photo of a bookshelf, a meal, or even a cluttered drawer, and AI Mode will identify items and describe how they relate to each other. It might suggest recipes using the ingredients shown, help identify a misplaced phone charger, or recommend the order to read your books.

Behind the scenes, the system runs multiple AI agents to analyse each element, providing layered, tailored responses.

Although other platforms like ChatGPT also support image recognition, Google’s strength lies in its decades of search data and visual indexing. Currently, the feature is accessible to Google One AI Premium subscribers or those enrolled in Search Labs via the Google mobile app.
