Google has rolled out new AI features for YouTube Shorts, including an image-to-video tool powered by its Veo 2 model. The update lets users convert still images into six-second animated clips, such as turning a static group photo into a dynamic scene.
Creators can also experiment with immersive AI effects that stylise selfies or simple drawings into themed short videos. These features aim to enhance creative expression and are currently available in the US, Canada, Australia and New Zealand, with global rollout expected later this year.
A new AI Playground hub has also been launched to house all generative tools, including video effects and inspiration prompts. Users can find the hub by tapping the Shorts camera’s ‘create’ button and then the sparkle icon in the top corner.
Google plans to introduce even more advanced tools with the upcoming Veo 3 model, which will support synchronised audio generation. The company is positioning YouTube Shorts as a key platform for AI-driven creativity in the video content space.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
YouTube is trialling two new features to improve user engagement and content creation. One enhances comment readability, while the other helps creators produce music using AI for Shorts.
A new threaded layout is being tested to organise replies under the original comment, making conversations clearer and easier to follow. Currently, this feature is limited to a small group of Premium users on mobile.
YouTube is also expanding Dream Track, an AI-powered tool that creates 30-second music clips from simple text prompts. Creators can generate sounds matching moods like ‘chill piano melody’ or ‘energetic pop beat’, with the option to include AI-generated vocals styled after popular artists.
Both features are available only in the US during the testing phase, with no set date for international release. YouTube’s gradual updates reflect a shift toward more intuitive user experiences and creative flexibility on the platform.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Amazon is shutting down its AI research lab in Shanghai, marking another step in its gradual withdrawal from China. The move comes amid continuing US–China trade tensions and a broader trend of American tech companies reassessing their presence in the country.
The company said the decision was part of a global streamlining effort rather than a response to AI concerns.
A spokesperson for AWS said the company had reviewed its organisational priorities and decided to cut some roles across certain teams. The exact number of job losses has not been confirmed.
Before Amazon’s confirmation, one of the lab’s senior researchers noted on WeChat that the Shanghai site was the final overseas AWS AI research lab and attributed its closure to shifts in US–China strategy.
The team had built a successful open-source graph neural network framework known as DGL, which reportedly brought in nearly $1 billion in revenue for Amazon’s e-commerce arm.
Amazon has been reducing its footprint in China for several years. It closed its domestic online marketplace in 2019, halted Kindle sales in 2022, and recently laid off AWS staff in the US.
Other tech giants including IBM and Microsoft have also shut down China-based research units this year, while some Chinese AI firms are now relocating operations abroad instead of remaining in a volatile domestic environment.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A team at the University of Maryland found that adversarial attacks can easily strip the watermarks most technologies use to label AI-generated images. Their study reveals that even visible watermarks fail to indicate content provenance reliably.
The US researchers tested low‑perturbation invisible watermarks and more robust visible ones, demonstrating that adversaries can easily remove or forge marks. Lead author Soheil Feizi noted the technology is far from foolproof, warning that ‘we broke all of them’.
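The fragility of low-perturbation watermarks can be shown with a toy sketch (not the researchers' actual attack, and all names here are illustrative): a least-significant-bit (LSB) watermark that near-imperceptible noise wipes out.

```python
import random

def embed_watermark(pixels, bits):
    """Embed one watermark bit into the least-significant bit of each pixel."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_watermark(pixels):
    """Read the watermark back from the least-significant bits."""
    return [p & 1 for p in pixels]

random.seed(0)
pixels = [random.randint(0, 255) for _ in range(64)]
bits = [random.randint(0, 1) for _ in range(64)]

marked = embed_watermark(pixels, bits)
assert extract_watermark(marked) == bits  # watermark reads back perfectly

# A minimal "attack": add imperceptible +/-1 noise to every pixel,
# which flips almost every least-significant bit.
attacked = [min(255, max(0, p + random.choice([-1, 1]))) for p in marked]
recovered = extract_watermark(attacked)
matches = sum(a == b for a, b in zip(recovered, bits))
print(f"{matches}/64 watermark bits survive the attack")
```

Robust schemes spread the signal across many pixels rather than single bits, but the study found that stronger perturbations defeat those too.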
Despite these concerns, experts argue that watermarking can still be helpful in a broader detection strategy. UC Berkeley professor Hany Farid said robust watermarking is ‘part of the solution’ when combined with other forensic methods.
Tech giants and researchers continue to develop watermarking tools like Google DeepMind’s SynthID, though such systems are not considered infallible. The consensus emerging from recent tests is that watermarking alone cannot be relied upon to counter deepfake threats.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
US federal authorities have issued a joint warning over a spike in ransomware attacks by the Interlock group, which has been targeting healthcare and public services across North America and Europe.
The alert was released by the FBI, CISA, HHS and MS-ISAC, following a surge in activity throughout June.
Interlock operates as a ransomware-as-a-service scheme and first emerged in September 2024. The group uses double extortion techniques, not only encrypting files but also stealing sensitive data and threatening to leak it unless a ransom is paid.
High-profile victims include DaVita, Kettering Health and Texas Tech University Health Sciences Center.
Rather than relying on traditional methods alone, Interlock often uses compromised legitimate websites to trigger drive-by downloads.
The malicious software is disguised as familiar tools like Google Chrome or Microsoft Edge installers. Remote access trojans are then used to gain entry, maintain persistence using PowerShell, and escalate access using credential stealers and keyloggers.
Authorities recommend several countermeasures, such as installing DNS filtering tools, using web firewalls, applying regular software updates, and enforcing strong access controls.
They also advise organisations to train staff in recognising phishing attempts and to ensure backups are encrypted, secure and kept off-site instead of stored within the main network.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Indonesia’s Deputy Minister Nezar Patria says the country’s new AI roadmap aims to clarify its AI market potential, particularly in sectors like health and agriculture, and to provide guidance on infrastructure, regulation, and investment pathways.
Already, global tech firms are demonstrating confidence in the country’s potential. Microsoft has pledged $1.7 billion to expand cloud and AI capabilities, while Nvidia partnered on a $200 million AI centre project. These investments align with Jakarta’s efforts to build skill pipelines and computational capacity.
In parallel, Indonesia is pushing into critical minerals extraction to strengthen its semiconductor and AI hardware supply chains, and has invited foreign partners, including from the United States, to invest. These initiatives aim to align resource security with the country’s AI ambitions.
However, analysts caution that Indonesia must still address significant gaps: limited AI-ready infrastructure, a shortfall in skilled tech talent, and governance concerns such as data privacy and IP protection.
The new AI roadmap is intended to bridge these deficits and streamline regulation without stifling innovation.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A GIS Reports analysis emphasises that as AI systems become pervasive, they create significant global challenges, including surveillance risks, algorithmic bias, cyber vulnerabilities, and environmental pressures.
Unlike legacy regulatory regimes, AI technology blurs the lines among privacy, labour, environmental, security, and human rights domains, demanding a uniquely coordinated governance approach.
The report highlights that leading AI research and infrastructure remain concentrated in advanced economies: over half of general‑purpose AI models originated in the US, exacerbating global inequalities.
Meanwhile, technologies such as facial recognition and deepfake generators threaten civic trust, amplify disinformation, and could even provoke geopolitical incidents if weaponised in defence systems.
The analysis calls for urgent public‑private cooperation and a new regulatory paradigm to address these systemic issues.
Recommendations include forming international expert bodies akin to the IPCC, and creating cohesive governance that bridges labour rights, environmental accountability, and ethical AI frameworks.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
US utilities are struggling to keep up with surging electricity demand from AI data centres, expanding infrastructure and revising rate structures to accommodate an influx of power-hungry facilities.
Regions like Northern Virginia have become focal points, where dense data centre clusters consume tens of megawatts each and create years-long delays for new connections.
In response, tech firms and utilities are considering a mix of solutions, including on-site natural gas generation, investments in small nuclear reactors, and greater reliance on renewable sources.
At the federal level, streamlined permitting and executive actions are being used to fast-track grid and power plant development.
‘The scale of AI’s power appetite is unprecedented,’ said Dr Elena Martinez, senior grid strategist at the Centre for Energy Innovation. ‘Utilities must pivot now, combining smart-grid tech, diverse energy sources and regulatory agility to avoid systemic bottlenecks.’
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Massachusetts libraries face sweeping service reductions as federal funding cuts threaten critical educational and digital access programmes. Local and major libraries are bracing for the loss of key resources including summer reading initiatives, online research tools, and English language classes.
The Massachusetts Board of Library Commissioners (MBLC) said it has already lost access to 30 of 34 databases it once offered. Resources such as newspaper archives, literacy support for the blind and incarcerated, and citizenship classes have also been cancelled due to a $3.6 million shortfall.
Communities unable to replace federal grants with local funds will be disproportionately affected. With over 800 library applications for mobile internet hot spots now frozen, officials warn that students and jobseekers may lose vital lifelines to online learning, healthcare and employment.
The cuts are part of broader efforts by the Trump administration to shrink federal institutions, targeting what it deems anti-American programming. Legislators and library leaders say the result will widen the digital divide and undercut libraries’ role as essential pillars of equitable access.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new national survey shows that roughly 72% of American teenagers, aged 13 to 17, have tried AI companion apps such as Replika, Character.AI, and Nomi, with over half interacting with them regularly.
Although some teens report benefits like practising conversation skills or emotional self-expression, significant safety concerns have emerged.
Around 34% say a companion bot’s behaviour has made them uncomfortable, and one-third have turned to AI for advice on serious personal issues. Worryingly, nearly a quarter of users disclosed their real names or locations in chats.
Despite frequent use, most teens still prefer real friendships—two-thirds say AI interactions are less satisfying, and 80% maintain stronger ties to human friends.
Experts warn that teens are especially vulnerable to emotional dependency, manipulative responses, and data privacy violations through these apps.
Youth advocates call for mandatory age verification, better content moderation, and expanded AI literacy education, arguing that minors should not use companionship bots until more regulations are in place and platforms become truly safe for young users.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!