YouTube under fire for AI video edits without creator consent

Anger is growing as YouTube secretly alters some uploaded videos using machine learning. The company admitted it has been experimenting with automated edits that sharpen images, smooth skin, and enhance clarity, all without notifying creators.

Although the changes were not generated by tools like ChatGPT or Gemini, they still relied on AI.

The issue has sparked concern among creators, who argue that the lack of consent undermines trust.

YouTuber Rhett Shull publicly criticised the platform, prompting YouTube liaison Rene Ritchie to clarify that the edits were simply efforts to ‘unblur and denoise’ footage, similar to smartphone processing.

However, creators emphasise that the difference lies in transparency, since phone users know when enhancements are applied, whereas YouTube users were unaware.

Consent remains central to debates around AI adoption, especially as regulation lags and governments push companies to expand their use of the technology.

Critics warn that even minor, automatic edits can treat user videos as training material without permission, raising broader concerns about control and ownership on digital platforms.

YouTube has not confirmed whether the experiment will expand or when it might end.

For now, viewers noticing oddly upscaled Shorts may be seeing the outcome of these hidden edits, which have only fuelled anger about how AI is being introduced into creative spaces.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

xAI accuses Apple and OpenAI of blocking competition in AI

Elon Musk’s xAI has filed a lawsuit in Texas accusing Apple and OpenAI of colluding to stifle competition in the AI sector.

The case alleges that both companies locked up markets to maintain monopolies, making it harder for rivals like X and xAI to compete.

The dispute follows Apple’s 2024 deal with OpenAI to integrate ChatGPT into Siri and other apps on its devices. According to the lawsuit, Apple’s exclusive partnership with OpenAI has prevented fair treatment of Musk’s products within the App Store, including the X app and xAI’s Grok app.

Musk previously threatened legal action against Apple over antitrust concerns, citing the company’s alleged preference for ChatGPT.

Musk, whose xAI acquired the social media platform X in a $45 billion all-stock deal earlier this year, is seeking billions of dollars in damages and a jury trial. The legal action highlights Musk’s ongoing feud with OpenAI’s CEO, Sam Altman.

Musk, a co-founder of OpenAI who left in 2018 after disagreements with Altman, has repeatedly criticised the company’s shift to a profit-driven model. He is also pursuing separate litigation against OpenAI and Altman over that transition in California.

Honor and Google deepen platform partnership with longer updates and AI integration

Honor has announced a joint commitment with Google to strengthen its Android platform support. The company now guarantees six years of Android OS and security updates for its upcoming Honor 400 series, matching the update commitments offered on Google’s Pixel and Samsung’s flagship devices.

This update period is part of Honor’s wider Alpha Plan, a strategic framework positioning the company as an AI device ecosystem player.

Honor will invest US$10 billion over five years to support this transformation through hardware innovation, software longevity and AI agent integration.

The partnership enables deeper cooperation with Google around Android updates and AI features. Honor already integrates tools like Circle to Search, AI photo expansion and the Gemini voice assistant on its Magic series. The extended software support promises longer device lifespans, reduced e-waste and improved user experience.

Gmail accounts targeted in phishing wave after Google data leak

Hackers linked to the ShinyHunters group have compromised Google’s Salesforce systems, leading to a data leak that puts Gmail and Google Cloud users at risk of phishing attacks.

Google confirmed that customer and company names were exposed, though no passwords were stolen. Attackers are now exploiting the breach with phishing schemes, including fake account resets and malware injection attempts through outdated access points.

With Gmail and Google Cloud serving around 2.5 billion users worldwide, both companies and individuals could be targeted. Early reports on Reddit describe callers posing as Google staff warning of supposed account breaches.

Google urges users to strengthen protections by running its Security Checkup, enabling Advanced Protection, and switching to passkeys instead of passwords. The company emphasised that its staff never initiates unsolicited password resets by phone or email.

AI could democratise higher education if implemented responsibly

Professor Orla Sheils of Trinity College Dublin calls on universities to embrace AI as a tool for educational equity rather than to fear it. She notes that AI is already ubiquitous in higher education, with students, lecturers, and researchers using it daily.

AI can help universities fulfil the democratic ideals of the Bologna Process and Ireland’s National AI Strategy by expanding lifelong learning, making education more accessible and supporting personalised student experiences.

Initiatives such as AI-driven tutoring, automated transcription and translation, streamlined timetabling and grading tools can free staff time while supporting learners with challenging schedules or disabilities.

Trinity’s AI Accountability Lab, led by Dr Abeba Birhane, exemplifies how institutions can blend innovation with ethics. Sheils warns that overreliance on AI risks academic integrity and privacy unless governed carefully. AI must serve educators, not replace them, preserving the human qualities of creativity and judgement in learning.

Google Cloud’s new AI tools expand enterprise threat protection

Following last week’s announcements on AI-driven cybersecurity, Google Cloud has unveiled further tools at its Security Summit 2025 aimed at protecting enterprise AI deployments and boosting efficiency for security teams.

The updates build on prior innovations instead of replacing them, reinforcing Google’s strategy of integrating AI directly into security operations.

Vice President and General Manager Jon Ramsey highlighted the growing importance of agentic approaches as AI agents operate across increasingly complex enterprise environments.

Building on the previous rollout, Google now introduces Model Armor protections, designed to shield AI agents from prompt injections, jailbreaking, and data leakage, enhancing safeguards without interrupting existing workflows.

Additional enhancements include the Alert Investigation agent, which automates event enrichment and analysis while offering actionable recommendations.

By combining Mandiant threat intelligence feeds with Google’s Gemini AI, organisations can now detect and respond to incidents across distributed agent networks more rapidly and efficiently than before.

SecOps Labs and updated SOAR dashboards provide early access to AI-powered threat detection experiments and comprehensive visualisations of security operations.

These tools allow teams to continue scaling agentic AI security, turning previous insights into proactive, enterprise-ready protections for real-world deployments.

Seemingly conscious AI may cause psychological problems and AI psychosis

Microsoft’s AI chief and DeepMind co-founder, Mustafa Suleyman, has warned that society is unprepared for AI systems that convincingly mimic human consciousness, cautioning that ‘seemingly conscious’ AI could lead the public to treat machines as sentient.

Suleyman highlighted potential risks including demands for AI rights, welfare, and even AI citizenship. Since the launch of ChatGPT in 2022, AI developers have increasingly designed systems to act ‘more human’.

Experts caution that such technology could intensify mental health problems and distort perceptions of reality. The phenomenon known as AI psychosis sees users forming intense emotional attachments to chatbots or believing AI to be conscious or divine.

Suleyman called for clear boundaries in AI development, emphasising that these systems should be tools for people rather than digital persons. He urged careful management of human-AI interaction without calling for a halt to innovation.

Global tech competition intensifies as the UK outlines a £1 trillion digital blueprint

The United Kingdom has unveiled a strategy to grow its digital economy to £1 trillion by harnessing AI, quantum computing, and cybersecurity. The plan emphasises public-private partnerships, training, and international collaboration to tackle skills shortages and infrastructure gaps.

The initiative builds on the UK tech sector’s £1.2 trillion valuation, with regional hubs in cities such as Bristol and Manchester fuelling expansion in emerging technologies. Experts, however, warn that outdated systems and talent deficits could stall progress unless workforce development accelerates.

AI is central to the plan, with applications spanning healthcare and finance. Quantum computing also features, with investments in research and cybersecurity aimed at strengthening resilience against supply disruptions and future threats.

The government highlights sustainability as a priority, promoting renewable energy and circular economies to ensure digital growth aligns with environmental goals. Regional investment in blockchain, agri-tech, and micro-factories is expected to create jobs and diversify innovation-driven growth.

By pursuing these initiatives, the UK aims to establish itself as a leading global tech player alongside the US and China. Ethical frameworks and adaptive strategies will be key to maintaining public trust and competitiveness.

Australia weighs cyber militia to counter rising digital threats

Cyberattacks are intensifying worldwide, with Australia now ranked fourth globally for threats against operational technology and industrial sectors. Rising AI-powered incursions have exposed serious vulnerabilities in the country’s national defence and critical infrastructure.

Australia’s 2023–2030 Cyber Security Strategy aims to strengthen resilience through six ‘cyber shields’, including legislation and intelligence sharing. But a skills shortage leaves organisations vulnerable as ransomware attacks on mining and manufacturing continue to rise.

One proposal gaining traction is the creation of a volunteer ‘cyber militia’. Inspired by Estonia’s cyber defence unit, this network would mobilise unconventional talent such as retirees, hobbyist hackers, and students to bolster monitoring, threat hunting, and incident response.

Supporters argue that such a force could fill gaps left by formal recruitment, particularly in smaller firms and rural networks. Critics, however, warn of vetting risks, insider threats, and the need for new legal frameworks to govern liability and training.

Pilot schemes in high-risk sectors, such as energy and finance, have been proposed, with public-private funding viewed as crucial. Advocates argue that a cyber militia could democratise security and foster collective responsibility, aligning with the country’s long-term cybersecurity strategy.

AI-designed proteins could transform longevity and drug development

OpenAI has launched GPT-4b micro, an AI model developed with longevity startup Retro Biosciences to accelerate protein engineering. Unlike chatbots, it focuses on biological sequences and 3D structures.

The model redesigned two Yamanaka factors, proteins that convert adult cells into stem cells, achieving a 50-fold increase in efficiency in lab tests and improved DNA repair. Older cells behaved more youthfully, potentially shortening the trial-and-error cycle in regenerative medicine.

AI-designed proteins could speed up drug development and allow longevity startups to rejuvenate cells safely and consistently. The work also opens new possibilities in synthetic biology beyond natural evolution.

OpenAI emphasised that the research is still early and lab-based, with clinical applications requiring caution. Transparency is key, as the technology’s power to design potent proteins quickly raises biosecurity considerations.
