Google expands NotebookLM with curated content and mobile access

While Gemini often dominates attention in Google’s AI portfolio, other innovative tools deserve the spotlight. One standout is NotebookLM, a virtual research assistant that helps users organise and interact with complex information across various subjects.

NotebookLM creates structured notebooks from curated materials, allowing meaningful engagement with the content. It supports dynamic features, including summaries and transformation options like Audio Overview, making research tasks more intuitive and efficient.

According to Google, featured notebooks are built using information from respected authors, academic institutions, and trusted nonprofits. Current topics include Shakespeare, Yellowstone National Park and more, offering a wide spectrum of well-sourced material.

Featured notebooks function just like regular ones, with added editorial quality. Users can navigate, explore, and repurpose content in ways that support individual learning and project needs. Google has confirmed the collection will grow over time.

NotebookLM remains in early development, yet the tool already shows potential for transforming everyday research tasks. Google also plans tighter integration with its other productivity tools, including Docs and Slides.

The tool significantly reduces the effort traditionally required for academic or creative research. Structured data presentation, combined with interactive features, makes information easier to consume and act upon.

NotebookLM was initially released on desktop but is now also available as a mobile app. Users can download it via the Google Play Store to create notebooks, add content, and stay productive from anywhere.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How AI-generated video is reshaping the film industry

AI-generated video has evolved at breakneck speed, moving from distorted and unconvincing clips to hyper-realistic creations that rival traditional filmmaking. What was once a blurry, awkward depiction of Will Smith eating spaghetti in 2023 is now flawlessly rendered on platforms like Google’s Veo 3.

In just months, tools such as Luma Labs’ Dream Machine, OpenAI’s Sora, and Runway AI’s Gen-4 have redefined what’s possible, drawing the attention of Hollywood studios, advertisers, and artists eager to test the limits of this new creative frontier.

Major industry players are already experimenting with AI for previsualisation, visual effects, and even entire animated films. Lionsgate and AMC Networks have partnered with Runway AI, with executives exploring AI-generated family-friendly versions of blockbuster franchises like John Wick and The Hunger Games.

The technology drastically cuts costs for complex scenes, making it possible to create elaborate previews—like a snowstorm filled with thousands of soldiers—for a fraction of the traditional price. However, while some see AI as a tool to expand creative possibilities, resistance remains strong.

Critics argue that AI threatens traditional artistic processes, raises ethical concerns over energy use and data training, and risks undermining human creativity. The debate mirrors past technological shifts in entertainment—inevitable yet disruptive.

As Runway and other pioneers push toward immersive experiences in augmented and virtual reality, the future of filmmaking may no longer be defined solely by Hollywood, but by anyone with access to these powerful tools.

Military AI and the void of accountability

In her blog post ‘Military AI: Operational dangers and the regulatory void,’ Julia Williams warns that AI is reshaping the battlefield, shifting from human-controlled systems to highly autonomous technologies that make life-and-death decisions. From the United States’ Project Maven to Israel’s AI-powered targeting in Gaza and Ukraine’s semi-autonomous drones, military AI is no longer a futuristic concept but a present reality.

While designed to improve precision and reduce risks, these systems carry hidden dangers—opaque ‘black box’ decisions, biases rooted in flawed data, and unpredictable behaviour in high-pressure situations. Operators either distrust AI or over-rely on it, sometimes without understanding how conclusions are reached, creating a new layer of risk in modern warfare.

Bias remains a critical challenge. AI can inherit societal prejudices from the data it is trained on, misinterpret patterns through algorithmic flaws, or encourage automation bias, where humans trust AI outputs even when they shouldn’t.

These flaws can have devastating consequences in military contexts, leading to wrongful targeting or escalation. Despite attempts to ensure ‘meaningful human control’ over autonomous weapons, the concept lacks clarity, allowing states and manufacturers to apply oversight unevenly. Responsibility for mistakes remains murky—should it lie with the operator, the developer, or the machine itself?

That uncertainty feeds into a growing global security crisis. Regulation lags far behind technological progress, with international forums disagreeing on how to govern military AI.

Meanwhile, an AI arms race accelerates between the US and China, driven by private-sector innovation and strategic rivalry. Export controls on semiconductors and key materials only deepen mistrust, while less technologically advanced nations fear both being left behind and becoming targets of AI warfare. The risk extends beyond states, as rogue actors and non-state groups could gain access to advanced systems, making conflicts harder to contain.

As Williams highlights, the growing use of military AI threatens to speed up the tempo of conflict and blur accountability. Without strong governance and global cooperation, it could escalate wars faster than humans can de-escalate them, shifting the battlefield from soldiers to civilian infrastructure and leaving humanity vulnerable to errors we may not survive.

YouTube tightens rules on AI-only videos

YouTube will begin curbing AI-generated content lacking human input to protect content quality and ad revenue. Since July 15, creators must disclose the use of AI and provide genuine creative value to qualify for monetisation.

The platform’s clampdown aims to prevent a flood of low-quality videos, known as ‘AI slop’, which risks overwhelming its algorithm and lowering ad returns. Analysts say Google’s new stance reflects the need to balance AI leadership with platform integrity.

YouTube will still allow AI-assisted content, but it insists creators must offer original contributions such as commentary, editing, or storytelling. Without this, AI-only videos will no longer earn advertising revenue.

The move also addresses rising concerns around copyright, ownership and algorithm overload, which could destabilise the platform’s delicate content ecosystem. Experts warn that unregulated AI use may harm creators who produce high-effort, original material.

Stakeholders say the changes will benefit creators focused on meaningful content while preserving advertiser trust and fair revenue sharing across millions of global partners. YouTube’s approach signals a shift towards responsible AI integration in media platforms.

Asia’s humanities under pressure from AI surge

Universities across Asia, notably in China, are slashing liberal arts enrolments to expand STEM and AI programmes. Institutions like Fudan and Tsinghua are reducing intake for humanities subjects, as policymakers push for a high-tech workforce.

Despite this shift, educators argue that sidelining subjects like history, philosophy, and ethics threatens the cultivation of critical thinking, moral insight, and cultural literacy, which are increasingly necessary in an AI-saturated world.

They contend that humanistic reasoning remains essential for navigating AI’s societal and ethical complexities.

Innovators are pushing for hybrid models of education. Humanities courses are evolving to incorporate AI-driven archival research, digital analysis, and data-informed argumentation, turning liberal arts into tools for interpreting technology, rather than resisting it.

Supporters emphasise that liberal arts students offer distinct advantages: they excel in communication, ethical judgement, storytelling and adaptability, capacities that machines lack. These soft skills are increasingly valued in workplaces that integrate AI.

Analysts predict that the future lies not in abandoning the humanities but in transforming them. When taught alongside technical disciplines, through STEAM initiatives and cross-disciplinary curricula, liberal arts can complement AI, ensuring that technology remains anchored in human values.

Zuckerberg unveils Meta’s multi-gigawatt AI data clusters

Meta Platforms is building several of the world’s largest data centres to power its AI ambitions, with the first facility expected to go online in 2026.

Chief Executive Mark Zuckerberg revealed on Threads that the site, called Prometheus, will be the first of multiple ‘titan clusters’, purpose-built to support AI development rather than relying on existing infrastructure.

Frustrated by earlier AI efforts, Meta is investing heavily in talent and technology. The company has committed up to $72 billion towards AI and data centre expansion, while Zuckerberg has personally recruited high-profile figures from OpenAI, DeepMind, and Apple.

That includes appointing Scale AI’s Alexandr Wang as chief AI officer through a $14.3 billion stake deal and securing Ruoming Pang with a compensation package worth over $200 million.

The facilities under construction will have multi-gigawatt capacity, placing Meta ahead of rivals such as OpenAI and Oracle in the race for large-scale AI infrastructure.

One supercluster in Richland Parish, Louisiana, is said to cover an area nearly the size of Manhattan, far larger than conventional data centre sites.

Zuckerberg confirmed that Meta is prepared to invest ‘hundreds of billions of dollars’ into building superintelligence capabilities, using revenue from its core advertising business on platforms like Facebook and Instagram to fund these projects instead of seeking external financing.

AI tools fuel smarter and faster marketing decisions

Nearly half of UK marketers surveyed already harness AI for essential tasks such as market research, campaign optimisation, creative asset testing, and budget allocation.

Specifically, 46% use AI for research, 44% generate multiple asset variants, 43.7% optimise mid-campaign content, and over 41% apply machine learning to audience targeting and media planning.

These tools enable faster ideation, real‑time asset iteration, and smarter spend decisions. Campaigns can now be A/B tested in moments rather than days, freeing teams to focus on higher‑level strategic and creative work.
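At its core, the rapid A/B testing these tools automate is a statistical comparison between two asset variants. As a rough illustration only (the click and impression figures below are invented, and commercial platforms use more sophisticated methods), a minimal two-proportion z-test in Python might look like this:

```python
from math import sqrt, erf

def ab_test_p_value(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test: does variant B's click rate differ from A's?"""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled click rate under the null hypothesis of no difference
    p = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical campaign data: two ad variants, 4,000 impressions each
z, p = ab_test_p_value(clicks_a=120, views_a=4000, clicks_b=165, views_b=4000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below a chosen threshold (commonly 0.05) suggests the difference between variants is unlikely to be chance; running this kind of check continuously across many asset variants is what lets tools report results in moments rather than days.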

Industry leaders emphasise that AI serves best as a ‘co-pilot’, enhancing productivity and insight, not replacing human creativity.

Responsible deployment requires careful prompt design, ongoing ethical review, and maintaining a clear brand identity in increasingly automated processes.

Nvidia chief says Chinese military unlikely to use US chips

Nvidia’s CEO, Jensen Huang, has downplayed concerns over Chinese military use of American AI technology, stating it is improbable that China would risk relying on US-made chips.

He noted the potential liabilities of using foreign tech, which could deter its adoption by the country’s armed forces.

In an interview on CNN’s Fareed Zakaria GPS, Huang responded to Washington’s growing export controls targeting advanced AI hardware sales to China.

He suggested the military would likely avoid US technology to reduce exposure to geopolitical risks and sanctions.

The Biden administration had tightened restrictions on AI chip exports, citing national security and fears that cutting-edge processors might boost China’s military capabilities.

Nvidia, whose chips are central to global AI development, has seen its access to the Chinese market increasingly limited under these rules.

While Nvidia remains a key supplier in the AI sector, Huang’s comments may ease some political pressure around the company’s overseas operations.

The broader debate continues over balancing innovation, commercial interest and national security in the AI age.

AI chills UK job hiring, especially among tech and finance roles

Recent data reveals a sharp drop in UK job openings for roles at risk of automation, with postings in tech and financial sectors falling by approximately 38% relative to less exposed fields.

The shift underscores how AI influences workforce planning, as employers reduce positions most vulnerable to machine replacement.

Graduate job seekers are bearing the brunt of this trend. Since the debut of tools like ChatGPT, entry-level roles have been withdrawn more swiftly, as firms opt for AI solutions over traditional hiring. This marks a significant change in early career pathways.

Although macroeconomic factors, such as rising wages and interest rate pressures, are also at play, the rapid pace of AI integration into hiring, particularly via proactive recruitment freezes, signals a fundamental transformation.

As AI tools become integral, firms across the UK are rethinking how, when, and whom they recruit.

AI’s future in banking depends on local solutions and trust

According to leading industry voices, banks and financial institutions are expected to play a central role in accelerating AI adoption across African markets.

Experts at the ACAMB stakeholders’ conference in Lagos stressed the need for region-specific AI solutions to meet Africa’s unique financial needs.

Niyi Yusuf, Chairman of the Nigerian Economic Summit Group, highlighted AI’s evolution since the 1950s and its growing influence on modern banking.

He called for AI algorithms tailored to local challenges, rather than relying on those designed for advanced economies.

Yusuf noted that banks have long used AI to enhance efficiency and reduce fraud, but warned that customer trust must remain at the heart of digital transformation. He said the success of future innovations depends on preserving transparency and safeguarding data.

Professor Pius Olarenwaju of the CIBN described AI as a general-purpose technology driving the fourth industrial revolution. He warned that resisting adoption would risk excluding stakeholders from the future of financial services.
