NotebookLM gains automated Deep Research tool and wider file support

Google is expanding NotebookLM with Deep Research, a tool designed to handle complex online inquiries and produce structured, source-grounded reports. The feature acts like a dedicated researcher, planning its own process and gathering material across the web.

Users can enter a question, choose a research style, and let Deep Research browse relevant sites before generating a detailed briefing. The tool runs in the background, allowing additional sources to be added without disrupting the workflow or leaving the notebook.

NotebookLM now supports more file types, including Google Sheets, Drive URLs, PDFs stored in Drive, and Microsoft Word documents. Google says this enables tasks such as summarising spreadsheets and quickly importing multiple Drive files for analysis.

The update continues the service’s gradual expansion since its late-2023 launch, which has brought features such as Video Overviews for turning dense materials into visual explainers. These follow earlier additions, such as Audio Overviews, which create podcast-style summaries of shared documents.

Google also released NotebookLM apps for Android and iOS earlier this year, extending access beyond desktop. The company says the latest enhancements should reach all users within a week.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China targets deepfake livestreams of public figures

Chinese cyberspace authorities announced a crackdown on AI deepfakes impersonating public figures in livestream shopping. Regulators said platforms have removed thousands of posts and sanctioned numerous accounts for misleading users.

Officials urged platforms to conduct cleanups and hold marketers accountable for deceptive promotions. Reported actions include the removal of more than 8,700 pieces of content and action taken against more than 11,000 impersonation accounts.

The measures build on wider campaigns against AI misuse, including deep synthesis rules and labelling obligations. Earlier efforts focused on curbing rumours, impersonation and harmful content across short videos and e-commerce.

Chinese authorities pledged a continued high-pressure stance to safeguard consumers and protect celebrity likenesses online. Platforms risk penalties if complaint handling and takedowns fail to deter repeat infringements in livestream commerce.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New guidelines by Apple curb how apps send user data to external AI systems

Apple has updated its App Review Guidelines to require developers to disclose and obtain permission before sharing personal data with third-party AI systems. The company says the change enhances user control as AI features become more prevalent across apps.

The revision arrives ahead of Apple’s planned 2026 release of an AI-enhanced Siri, expected to perform actions across apps and rely partly on Google’s Gemini technology. Apple is also moving to ensure external developers do not pass personal data to AI providers without explicit consent.

Rule 5.1.2(i) already limited the sharing of personal information without permission. The update adds explicit language naming third-party AI as a category that requires disclosure, reflecting growing scrutiny of how apps use machine learning and generative models.

The shift could affect developers who use external AI systems for features such as personalisation or content generation. Enforcement details remain unclear, as the term ‘AI’ encompasses a broad range of technologies beyond large language models.
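For developers, the practical effect is that any hand-off of user data to an external AI service now needs a disclosure and an explicit permission step. As a rough illustration only, the Swift sketch below gates a hypothetical summarisation request on a recorded consent flag; the endpoint, consent store, and function names are invented for the example and are not drawn from Apple’s guidelines.

```swift
import Foundation

// Hypothetical sketch: gate calls to an external AI service behind an
// explicit, recorded user consent check. The endpoint and consent key
// below are illustrative only, not part of Apple's guidelines.

enum ConsentError: Error {
    case notGranted
}

struct AIConsentStore {
    private static let key = "hasConsentedToThirdPartyAI"

    static var hasConsent: Bool {
        UserDefaults.standard.bool(forKey: key)
    }

    static func record(_ granted: Bool) {
        UserDefaults.standard.set(granted, forKey: key)
    }
}

func summarise(_ text: String) async throws -> String {
    // Refuse to share personal data unless the user has opted in.
    guard AIConsentStore.hasConsent else {
        throw ConsentError.notGranted
    }

    // Placeholder third-party AI endpoint for the example.
    var request = URLRequest(url: URL(string: "https://api.example-ai.com/v1/summarise")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(["text": text])

    let (data, _) = try await URLSession.shared.data(for: request)
    return String(decoding: data, as: UTF8.self)
}
```

In a real app, the consent prompt that sets the flag would also need to name the third-party provider and explain what data is shared, since the updated rule requires disclosure as well as permission.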

Apple released several other guideline updates alongside the AI change, including support for its new Mini Apps Programme and amendments involving creator tools, loan products, and regulated services such as crypto exchanges.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn introduces AI-powered people search for faster networking

LinkedIn has launched an AI-powered people search feature, allowing users to find relevant professionals using plain language instead of traditional keywords and filters. The new tool surfaces experts based on experience and skills rather than exact job titles or company names.

The feature draws on AI and LinkedIn’s professional data to match users with the right people at the right time, turning connections into concrete opportunities and helping members discover mentors, collaborators, or industry specialists more efficiently.

Previously, searches required highly specific information, making it difficult to identify the right professional. The new conversational approach simplifies the process, making LinkedIn a more intuitive and powerful platform for networking, career planning, and business growth.

AI-powered people search is currently available to Premium subscribers in the US, with a global rollout planned in the coming months to help professionals connect, collaborate, and find opportunities more quickly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI drives a new identity security crisis

New research from Rubrik Zero Labs warns that agentic AI is reshaping the identity landscape faster than organisations can secure it.

The study reveals a surge in non-human identities created through automation and API-driven workflows, with their numbers now exceeding human users by a striking margin.

Most firms have already introduced AI agents into their identity systems or plan to do so, yet many struggle to govern the growing volume of machine credentials.

Experts argue that identity has become the primary attack surface as remote work, cloud adoption and AI expansion remove traditional boundaries. Threat actors increasingly rely on valid credentials instead of technical exploits, which makes weaknesses in identity governance far more damaging.

Rubrik’s researchers and external analysts agree that a single compromised key or forgotten agent account can provide broad access to sensitive environments.

Industry specialists highlight that agentic AI disrupts established IAM practices by blurring distinctions between human and machine activity.

Organisations often cannot determine whether a human or an automated agent performed a critical action, which undermines incident investigations and weakens zero-trust strategies. Poor logging, weak lifecycle controls and abandoned machine identities further expand the attack surface.

Rubrik argues that identity resilience is becoming essential, since IAM tools alone cannot restore trust after a breach. Many firms have already switched IAM providers, reflecting widespread dissatisfaction with current safeguards.

Analysts recommend tighter control of agent creation, stronger credential governance and a clearer understanding of how AI-driven identities reshape operational and security risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU investigates Google over potential Digital Markets Act breach

The European Commission has opened an investigation into whether Google may be breaching the Digital Markets Act by unfairly demoting news publishers in search results.

The inquiry centres on Google’s ‘site reputation abuse’ policy, which appears to lower rankings for publishers that host content from commercial partners, even when those partnerships are a legitimate way of monetising online journalism.

The Commission is examining whether Alphabet’s approach restricts publishers from conducting business, innovating, and cooperating with third-party content providers. Officials highlighted concerns that such demotions may undermine revenue at a difficult moment for the media sector.

These proceedings do not imply a final decision; instead, they allow the EU to gather evidence and assess Google’s practices in detail.

If the Commission finds evidence of non-compliance, it will present preliminary findings and request corrective measures. The investigation is expected to conclude within 12 months.

Under the DMA, infringements can lead to fines of up to ten percent of a company’s worldwide turnover, rising to twenty percent for repeated violations, alongside possible structural remedies.

Senior Commissioners stressed that gatekeepers must offer fair and non-discriminatory access to their platforms. They argued that protecting publishers’ ability to reach audiences supports media pluralism, innovation, and democratic resilience.

Google Search, designated as a core platform service under the DMA, has been required to comply fully with the regulation since March 2024.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New York Times lawsuit prompts OpenAI to strengthen privacy protections

OpenAI says a New York Times demand to hand over 20 million private ChatGPT conversations threatens user privacy and breaks with established security norms. The request forms part of the Times’ lawsuit over alleged misuse of its content.

The company argues the demand would expose highly personal chats from people with no link to the case. It previously resisted broader requests, including one seeking more than a billion conversations, and says the latest move raises similar concerns about proportionality.

OpenAI says it offered privacy-preserving alternatives, such as targeted searches and high-level usage data, but these were rejected. It adds that chats covered by the order are being de-identified and stored in a secure, legally restricted environment.

The dispute arises as OpenAI accelerates its security roadmap, which includes plans for client-side encryption and automated systems that detect serious safety risks without requiring broad human access. These measures aim to ensure private conversations remain inaccessible to external parties.

OpenAI maintains that strong privacy protections are essential as AI tools handle increasingly sensitive tasks. It says it will challenge any attempt to make private conversations public and will continue to update users as the legal process unfolds.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI credentials grow as AWS launches practical training pathway

AWS is launching four solutions to help close the AI skills gap as demand rises and job requirements shift. The company positions the new tools as a comprehensive learning journey, offering structured pathways that progress from foundational knowledge to hands-on practice and formal validation.

AWS Skill Builder now hosts over 220 free AI courses, ranging from beginner introductions to advanced topics in generative and agentic AI. The platform enables learners to build skills at their own pace, with flexible study options that accommodate work schedules.

Practical experience anchors the new suite. The Meeting Simulator helps learners explain AI concepts to realistic personas and refine communication with instant feedback. Cohorts Studio offers team-based training through study groups, boot camps, and game-based challenges.

AWS is expanding its credential portfolio with the AWS Certified Generative AI Developer – Professional certification. The exam helps cloud practitioners demonstrate proficiency in foundation models, RAG architectures, and responsible deployment, supported by practice tasks and simulated environments.

Learners can validate hands-on capability through new microcredentials that require troubleshooting and implementation in real AWS settings. Combined credentials signal both conceptual understanding and task-ready skills, with Skill Builder’s expanded library offering a clear starting point for career progression.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Irish regulator opens DSA probe into X

Ireland’s media watchdog has opened a formal investigation into X under the EU’s Digital Services Act. Regulators will assess appeal rights and internal complaint handling after reports of inaccessible processes for users.

Irish officials will examine whether users can challenge refusals to remove reported content and receive clear outcomes. Potential penalties reach up to 6% of global turnover for confirmed breaches.

The case stems from ongoing supervision, a user complaint, and information from HateAid, and marks the first such probe by Ireland. Wider EU scrutiny of very large online platforms continues.

Other services, including Meta and TikTok, have faced DSA actions, underscoring tighter enforcement across the bloc. Remedial measures and transparency improvements could follow if non-compliance is found.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

European Commission launches Culture Compass to strengthen the EU identity

The European Commission unveiled the Culture Compass for Europe, a framework designed to place culture at the heart of EU policies.

The initiative aims to foster the EU’s identity, celebrate diversity, and support excellence across the continent’s cultural and creative sectors.

The Compass addresses the challenges facing cultural industries, including restrictions on artistic expression, precarious working conditions for artists, unequal access to culture, and the transformative impact of AI.

It provides guidance along four key directions: upholding European values and cultural rights, empowering artists and professionals, enhancing competitiveness and social cohesion, and strengthening international cultural partnerships.

Several initiatives will support the Compass, including the EU Artists Charter for fair working conditions, a European Prize for Performing Arts, a Youth Cultural Ambassadors Network, a cultural data hub, and an AI strategy for the cultural sector.

The Commission will track progress through a new report on the State of Culture in the EU and seeks a Joint Declaration with the European Parliament and Council to reinforce political commitment.

Commission officials emphasised that the Culture Compass connects culture to Europe’s future, placing artists and creativity at the centre of policy and ensuring the sector contributes to social, economic, and international engagement.

Culture is portrayed not as a side story, but as the story of the EU itself.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!