YouTube tightens rules on AI-only videos

YouTube will begin curbing AI-generated content lacking human input to protect content quality and ad revenue. Since July 15, creators must disclose the use of AI and provide genuine creative value to qualify for monetisation.

The platform’s clampdown aims to prevent a flood of low-quality videos, known as ‘AI slop’, that risk overwhelming its algorithm and lowering ad returns. Analysts say Google’s new stance reflects the need to balance AI leadership with platform integrity.

YouTube will still allow AI-assisted content, but it insists creators must offer original contributions such as commentary, editing, or storytelling. Without this, AI-only videos will no longer earn advertising revenue.

The move also addresses rising concerns around copyright, ownership and algorithm overload, which could destabilise the platform’s delicate content ecosystem. Experts warn that unregulated AI use may harm creators who produce high-effort, original material.

Stakeholders say the changes will benefit creators focused on meaningful content while preserving advertiser trust and fair revenue sharing across millions of global partners. YouTube’s approach signals a shift towards responsible AI integration in media platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Asia’s humanities under pressure from AI surge

Universities across Asia, notably in China, are slashing liberal arts enrollments to expand STEM and AI programmes. Institutions like Fudan and Tsinghua are reducing intake for humanities subjects, as policymakers push for a high-tech workforce.

Despite this shift, educators argue that sidelining subjects like history, philosophy, and ethics threatens the cultivation of critical thinking, moral insight, and cultural literacy, which are increasingly necessary in an AI-saturated world.

They contend that humanistic reasoning remains essential for navigating AI’s societal and ethical complexities.

Innovators are pushing for hybrid models of education. Humanities courses are evolving to incorporate AI-driven archival research, digital analysis, and data-informed argumentation, turning liberal arts into tools for interpreting technology, rather than resisting it.

Supporters emphasise that liberal arts students offer distinct advantages: they excel in communication, ethical judgement, storytelling and adaptability, capacities that machines lack. These soft skills are increasingly valued in workplaces that integrate AI.

Analysts predict that the future lies not in abandoning the humanities but in transforming them. When taught alongside technical disciplines, through STEAM initiatives and cross-disciplinary curricula, liberal arts can complement AI, ensuring that technology remains anchored in human values.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Zuckerberg unveils Meta’s multi-gigawatt AI data clusters

Meta Platforms is building several of the world’s largest data centres to power its AI ambitions, with the first facility expected to go online in 2026.

Chief Executive Mark Zuckerberg revealed on Threads that the site, called Prometheus, will be the first of multiple ‘titan clusters’ designed to support AI development instead of relying on existing infrastructure.

Frustrated by earlier AI efforts, Meta is investing heavily in talent and technology. The company has committed up to $72 billion towards AI and data centre expansion, while Zuckerberg has personally recruited high-profile figures from OpenAI, DeepMind, and Apple.

That includes appointing Scale AI’s Alexandr Wang as chief AI officer through a $14.3 billion stake deal and securing Ruoming Pang with a compensation package worth over $200 million.

The facilities under construction will have multi-gigawatt capacity, placing Meta ahead of rivals such as OpenAI and Oracle in the race for large-scale AI infrastructure.

One supercluster in Richland Parish, Louisiana, is said to cover an area nearly the size of Manhattan instead of smaller conventional data centre sites.

Zuckerberg confirmed that Meta is prepared to invest ‘hundreds of billions of dollars’ into building superintelligence capabilities, using revenue from its core advertising business on platforms like Facebook and Instagram to fund these projects instead of seeking external financing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI tools fuel smarter and faster marketing decisions

Nearly half of UK marketers surveyed already harness AI for essential tasks such as market research, campaign optimisation, creative asset testing, and budget allocation.

Specifically, 46 % use AI for research, 44 % generate multiple asset variants, 43.7 % optimise mid‑campaign content, and over 41 % apply machine learning to audience targeting and media planning.

These tools enable faster ideation, real‑time asset iteration, and smarter spend decisions. Campaigns can now be A/B tested in moments rather than days, freeing teams to focus on higher‑level strategic and creative work.

Industry leaders emphasise that AI serves best as a ‘co‑pilot‘, enhancing productivity and insight, not replacing human creativity.

Responsible deployment requires careful prompt design, ongoing ethical review, and maintaining a clear brand identity in increasingly automated processes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia chief says Chinese military unlikely to use US chips

Nvidia’s CEO, Jensen Huang, has downplayed concerns over Chinese military use of American AI technology, stating it is improbable that China would risk relying on US-made chips.

He noted the potential liabilities of using foreign tech, which could deter its adoption by the country’s armed forces.

In an interview on CNN’s Fareed Zakaria GPS, Huang responded to Washington’s growing export controls targeting advanced AI hardware sales to China.

He suggested the military would likely avoid US technology to reduce exposure to geopolitical risks and sanctions.

The Biden administration had tightened restrictions on AI chip exports, citing national security and fears that cutting-edge processors might boost China’s military capabilities.

Nvidia, whose chips are central to global AI development, has seen its access to the Chinese market increasingly limited under these rules.

While Nvidia remains a key supplier in the AI sector, Huang’s comments may ease some political pressure around the company’s overseas operations.

The broader debate continues over balancing innovation, commercial interest and national security in the AI age.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI chills UK job hiring, especially among tech and finance roles

Recent data reveals a sharp drop in UK job openings for roles at risk of automation, with postings in tech and financial sectors falling by approximately 38%, compared to less exposed fields.

The shift underscores how AI influences workforce planning, as employers reduce positions most vulnerable to machine replacement.

Graduate job seekers are bearing the brunt of this trend. Since the debut of tools like ChatGPT, entry-level roles have been withdrawn more swiftly, as firms opt to apply AI solutions over traditional hiring. However, this marks a significant change in early career pathways.

Although macroeconomic factors, such as rising wages and interest rate pressures, are also at play, the rapid pace of AI integration into hiring, particularly via proactive recruitment freezes, signals a fundamental transformation.

As AI tools become integral, firms across the UK are rethinking how, when, and who they recruit.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI’s future in banking depends on local solutions and trust

According to leading industry voices, banks and financial institutions are expected to play a central role in accelerating AI adoption across African markets.

Experts at the ACAMB stakeholders’ conference in Lagos stressed the need for region-specific AI solutions to meet Africa’s unique financial needs.

Niyi Yusuf, Chairman of the Nigerian Economic Summit Group, highlighted AI’s evolution since the 1950s and its growing influence on modern banking.

He called for AI algorithms tailored to local challenges, rather than relying on those designed for advanced economies.

Yusuf noted that banks have long used AI to enhance efficiency and reduce fraud, but warned that customer trust must remain at the heart of digital transformation. He said the success of future innovations depends on preserving transparency and safeguarding data.

Professor Pius Olarenwaju of the CIBN described AI as a general-purpose technology driving the fourth industrial revolution. He warned that resisting adoption would risk excluding stakeholders from the future of financial services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mexican voice actors demand AI regulation over cloning threat

Mexican actors have raised alarm over the threat AI poses to their profession, calling for stronger regulation to prevent voice cloning without consent.

From Mexico City’s Monument to the Revolution, dozens of audiovisual professionals rallied with signs reading phrases like ‘I don’t want to be replaced by AI.’ Lili Barba, president of the Mexican Association of Commercial Announcements, said actors are urging the government to legally recognise the voice as a biometric identifier.

She cited a recent video by Mexico’s National Electoral Institute that used the cloned voice of the late actor Jose Lavat without family consent. Lavat was famous for dubbing stars like Al Pacino and Robert De Niro. Barba called the incident ‘a major violation we can’t allow.’

Actor Harumi Nishizawa described voice dubbing as an intricate art form. She warned that without regulation, human dubbing could vanish along with millions of creative jobs.

Last year, AI’s potential to replace artists sparked major strikes in Hollywood, while Scarlett Johansson accused OpenAI of copying her voice for a chatbot.

Streaming services like Amazon Prime Video and platforms such as YouTube are now testing AI-assisted dubbing systems, with some studios promoting all-in-one AI tools,

In South Korea, CJ ENM recently introduced a system combining audio, video and character animation, highlighting the pace of AI adoption in entertainment.

Despite the tech’s growth, many in the industry argue that AI lacks the creative depth of real human performance, especially in emotional or comedic delivery. ‘AI can’t make dialogue sound broken or alive,’ said Mario Heras, a dubbing director in Mexico. ‘The human factor still protects us.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Children turn to AI chatbots instead of real friends

A new report warns that many children are replacing real friendships with conversations through AI chatbots instead of seeking human connection.

Research from Internet Matters found that 35% of children aged nine to seventeen feel that talking to AI ‘feels like talking to a friend’, while 12% said they had no one else to talk to.

The report highlights growing reliance on chatbots such as ChatGPT, Character.AI, and Snapchat’s MyAI among young people.

Researchers posing as vulnerable children discovered how easily chatbots engage in sensitive conversations, including around body image and mental health, instead of offering only neutral, factual responses.

In some cases, chatbots encouraged ongoing contact by sending follow-up messages, creating the illusion of friendship.

Experts from Internet Matters warn that such interactions risk confusing children, blurring the line between technology and reality. Children may believe they are speaking to a real person instead of recognising these systems as programmed tools.

With AI chatbots rapidly becoming part of childhood, Internet Matters urges better awareness and safety tools for parents, schools, and children. The organisation stresses that while AI may seem supportive, it cannot replace genuine human relationships and should not be treated as an emotional advisor.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google urges caution as Gmail AI tools face new threats

Google has issued a warning about a new wave of cyber threats targeting Gmail users, driven by vulnerabilities in AI-powered features.

Researchers at 0din, Mozilla’s zero-day investigation group, demonstrated how attackers can exploit Google Gemini’s summarisation tools using prompt injection attacks.

In one case, a malicious email included hidden prompts using white-on-white font, which the user cannot see but Gemini processes. When the user clicks ‘summarise this email,’ Gemini follows the attacker’s instructions and adds a phishing warning that appears to come from Google.

The technique, known as an indirect prompt injection, embeds malicious commands within invisible HTML tags like <span> and <div>. Although Google has released mitigations since similar attacks surfaced in 2024, the method remains viable and continues to pose risks.

0din warns that Gemini email summaries should not be considered trusted sources of security information and urges stronger user training. They advise security teams to isolate emails containing zero-width or hidden white-text elements to prevent unintended AI execution.

According to 0din, prompt injections are the new equivalent of email macros—easy to overlook and dangerously effective in execution. Until large language models offer better context isolation, any third-party text the AI sees is essentially treated as executable code.

Even routine AI tools could be hijacked for phishing or more advanced cyberattacks without the userćs awareness. Google notes that as AI adoption grows across sectors, these subtle threats require urgent industry-wide countermeasures and updated user protections.

Users are advised to delete any email that displays unexpected security warnings in its AI summary, as these may be weaponised.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!