Canva rolls out text-to-video tool for creators

Canva has launched a new tool powered by Google’s Veo 3 model, allowing users to generate short cinematic video clips using simple text prompts. Known as ‘Create a Video Clip’, the feature produces eight-second videos with sound directly inside the Canva platform.

This marks one of the first commercial uses of Veo 3, which debuted last month. The AI tool is available to Canva Pro, Teams, Enterprise and Nonprofit users, who can initially generate up to five clips per month.

Danny Wu, Canva’s head of AI products, said the feature simplifies video creation with synchronised dialogue, sound effects and editing options. Users can integrate the clips into presentations, social media designs or other formats via Canva’s built-in video editor.

Canva is also extending the tool to users of Leonardo.Ai, a related image generation service. The feature is protected by Canva Shield, a content moderation and indemnity framework aimed at enterprise-level security and trust.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI helps Google curb scams and deepfakes in India

Google has introduced its Safety Charter for India to combat rising online fraud, deepfakes and cybersecurity threats. The charter outlines a collaborative plan focused on user safety, responsible AI development and protection of digital infrastructure.

AI-powered measures have already helped Google detect 20 times more scam-related pages, block over 500 million scam messages monthly, and issue 2.5 billion suspicious link warnings. Its ‘Digikavach’ programme has reached over 177 million Indians with fraud prevention tools and awareness campaigns.

Google Pay alone averted financial fraud worth ₹13,000 crore in 2024, while Google Play Protect stopped nearly 6 crore (60 million) high-risk app installations. These figures reflect the company’s ‘AI-first, secure-by-design’ strategy for early threat detection and response.

The tech giant is also collaborating with IIT-Madras on post-quantum cryptography and privacy-first technologies. Through language models like Gemini and watermarking initiatives such as SynthID, Google aims to build trust and inclusion across India’s digital ecosystem.

Hexagon unveils AEON humanoid robot powered by NVIDIA to build industrial digital twins

As industries struggle to fill 50 million job vacancies globally, Hexagon has unveiled AEON — a humanoid robot developed in collaboration with NVIDIA — to tackle labour shortages in manufacturing, logistics and beyond.

AEON can perform complex tasks like reality capture, asset inspection and machine operation, thanks to its integration with NVIDIA’s full-stack robotics platform.

By simulating skills using NVIDIA Isaac Sim and training in Isaac Lab, AEON drastically reduced its development time, mastering locomotion in weeks instead of months.

The robot is built using NVIDIA’s trio of AI systems, combining simulation with onboard intelligence powered by Jetson Orin and IGX Thor for real-time navigation and safe collaboration.

AEON will be deployed in factories and warehouses, scanning environments to build high-fidelity digital twins through Hexagon’s cloud-based Reality Cloud Studio and NVIDIA Omniverse.

Hexagon believes AEON can bring digital twins into mainstream use, streamlining industrial workflows through advanced sensor fusion and simulation-first AI. The company is also leveraging synthetic motion data to accelerate robot learning, pushing the boundaries of physical AI for real-world applications.

Meta offers $100M bonuses to poach OpenAI talent but Altman defends mission-driven culture

Meta has reportedly attempted to lure top talent from OpenAI with signing bonuses exceeding $100 million, according to OpenAI’s CEO Sam Altman.

Speaking on a podcast hosted by his brother, Jack Altman, the OpenAI chief revealed that Meta had offered extremely high compensation packages to key OpenAI staff, yet none had accepted.

Meta CEO Mark Zuckerberg is said to be directly involved in recruiting for a new ‘superintelligence’ team as part of the latest AI push.

The tech giant recently announced a $14.3 billion investment in Scale AI and brought Scale’s CEO, Alexandr Wang, on board. Altman believes Meta sees ChatGPT not only as a competitor to Google but also as a potential rival to Facebook in the contest for user attention.

Altman questioned whether such high-compensation strategies foster the right environment, suggesting that culture cannot be built on upfront financial incentives alone.

He stressed that OpenAI prefers aligning rewards with its mission instead of offering massive pay packets. In his view, sustainable innovation stems from purpose, not payouts.

While recognising Meta’s persistence in the AI race, Altman suggested that the company will likely try again if the current effort fails. He highlighted a cultural difference, saying OpenAI has built a team focused on consistent innovation — something he believes Meta still struggles to understand.

Deepfake technology fuels new harassment risks

A growing wave of AI-generated media is reshaping workplace harassment, with deepfakes used to impersonate colleagues and circulate fabricated explicit content in the US. Recent studies found that, by 2023, the overwhelming majority of deepfakes were sexually explicit, most often targeting women.

Organisations risk liability under existing laws if deepfake incidents create hostile work environments. New legislation like the TAKE IT DOWN Act and Florida’s Brooke’s Law now mandates rapid removal of non-consensual intimate imagery.

Employers are also bracing for proposed rules requiring strict authentication of AI-generated evidence in legal proceedings. Industry experts advise an urgent review of harassment and acceptable use policies, clear incident response plans and targeted training for HR, legal and IT teams.

Protective measures include auditing insurance coverage for synthetic media claims and staying abreast of evolving state and federal regulations. Forward-looking employers already embed deepfake awareness into their harassment prevention and cybersecurity training to safeguard workplace dignity.

Plumbing still safe as AI replaces office jobs, says AI pioneer

Nobel Prize-winning scientist Geoffrey Hinton, often called the ‘Godfather of AI,’ has warned that many intellectual jobs are at risk of being replaced by AI—while manual trades like plumbing may remain safe for years to come.

Speaking on the Diary of a CEO podcast, Hinton predicted that AI will eventually surpass human capabilities across most fields, but said it will take far longer to master physical skills. ‘A good bet would be to be a plumber,’ he noted, citing the complexity of physical manipulation as a barrier for AI.

Hinton, known for his pioneering work on neural networks, said ‘mundane intellectual labour’ would be among the first to go. ‘AI is just going to replace everybody,’ he said, naming paralegals and call centre workers as particularly vulnerable.

He added that while highly skilled roles or those in sectors with overwhelming demand—like healthcare—may endure, most jobs are unlikely to escape the wave of disruption. ‘Most jobs, I think, are not like that,’ he said, forecasting widespread upheaval in the labour market.

Workplace deepfake abuse: What employers must know

Deepfake technology—AI-generated videos, images, and audio—has entered the workplace in alarming ways.

Once difficult to produce, deepfakes are now widely accessible and are being used to harass, impersonate, or intimidate employees. These synthetic media attacks can cause deep psychological harm, damage reputations, and expose employers to serious legal risks.

While US federal law hasn’t yet caught up, new legislation such as the TAKE IT DOWN Act and Florida’s Brooke’s Law requires platforms to remove non-consensual deepfake content within 48 hours.

Meanwhile, employers could face claims under existing workplace laws if they fail to act on deepfake harassment. Inaction may lead to lawsuits for creating a hostile environment or for negligent oversight.

Most workplace policies still do not mention synthetic media, a gap that creates blind spots, especially during investigations, where fake images or audio could wrongly influence decisions.

Employers need to shift how they assess evidence and protect both accused and accuser fairly. It’s time to update handbooks, train staff, and build clear response plans that include digital impersonation and deepfake abuse.

By treating deepfakes as a modern form of harassment instead of just a tech issue, organisations can respond faster, protect staff, and maintain trust. Proactive training, updated policies, and legal awareness will be crucial to workplace safety in the age of AI.

Oxford physicists set new qubit accuracy record

Physicists at the University of Oxford have achieved a ground‑breaking error rate in quantum logic operations, reducing it to just 0.000015 percent, or one mistake in 6.7 million operations. The result marks nearly a ten‑fold improvement on their previous record, set in 2014.

The team used a trapped calcium‑ion qubit controlled by microwave signals instead of lasers, achieving high stability at room temperature and eliminating the need for magnetic shielding. The approach also offers cheaper, more robust control that is compatible with ion‑trap chip technology.

Reducing the error rate shrinks the infrastructure needed for error correction, meaning future quantum computers could be smaller, faster and more efficient. Two‑qubit operations still lag, however, with error rates of around one in 2,000, highlighting the challenges that remain for full‑scale quantum systems.

The findings, published in Physical Review Letters, bring practical quantum computing a significant step closer. The Oxford researchers involved include Professor David Lucas, Molly Smith, Aaron Leu and Dr Mario Gely.

Nvidia’s Jensen Huang clashes with Anthropic CEO over AI job loss predictions

A fresh dispute has erupted between Nvidia and Anthropic after CEO Dario Amodei warned that AI could eliminate 50% of entry-level white-collar jobs within five years, potentially pushing unemployment as high as 20%.

Nvidia’s Jensen Huang dismissed the claim, saying at VivaTech in Paris that he ‘pretty much disagreed with almost everything’ Amodei says, accusing him of fearmongering and advocating for a monopoly on AI development.

Huang emphasised the importance of open, transparent development, stating, ‘If you want things to be done safely and responsibly, you do it in the open… Don’t do it in a dark room and tell me it’s safe.’

Anthropic pushed back, saying Amodei supports national AI transparency standards and never claimed only Anthropic can build safe AI.

The clash comes amid growing scrutiny of Anthropic, which faces a lawsuit from Reddit for allegedly scraping content without consent and controversy over a Claude 4 Opus test that simulated blackmail scenarios.

The companies have also clashed over AI export controls to China, with Anthropic urging tighter rules and Nvidia denying reports that its chips were smuggled using extreme methods like fake pregnancies or shipments with live lobsters.

Huang maintains an optimistic outlook, saying AI will create new jobs in fields like prompt engineering. At the same time, Amodei has consistently warned that the economic fallout could be severe, rejecting universal basic income as a long-term solution.

G7 trip could shift political balance for President Lee

President Lee Jae-myung is making his first major diplomatic appearance at the G7 summit in Canada, just two weeks into office. The trip marks a reset of South Korea’s foreign policy, focusing on pragmatic diplomacy prioritising national interest.

Officials say the visit aims to restart high-level talks after six months of stagnation, and could include a pivotal meeting with US President Donald Trump. Trade tensions, defence costs and the future of US troops in South Korea are expected to dominate any bilateral agenda.

Lee is also preparing for potential talks with Japanese Prime Minister Shigeru Ishiba as his administration tests its strategy amid rising US-China rivalry. A trilateral summit is also under consideration, which would add further weight to this diplomatic debut.

The summit’s outcome could influence Lee’s political standing at home, where leaders have often used foreign success to strengthen domestic reforms. However, failure to secure tangible results could expose the new administration to early criticism.
