Google has added a new dimension to NotebookLM by introducing Video Overviews, a feature that transforms your content into narrated slide presentations.
Originally revealed at Google I/O, the tool builds on the popularity of Audio Overviews, which generated AI-hosted podcast-style summaries. Instead of relying solely on audio, users can now enjoy visual storytelling powered by the same AI.
Video Overviews automatically pulls elements like images, diagrams, quotes and statistics from documents to create slide-based summaries.
The tool supports professionals and students by simplifying complex reports or academic papers into engaging visual formats. Users can also customise the video output by defining learning goals, selecting key topics, or tailoring it to a specific audience.
For now, the rollout is limited to English-speaking users on desktops, but Google plans to expand the formats. Narrated slides are the first to launch, combining clear visuals with spoken summaries, helping visual learners engage with content more effectively instead of reading lengthy text.
Alongside the new feature, Google has redesigned the NotebookLM Studio interface. Users can now generate and store multiple outputs—Audio Overviews, Reports, Study Guides, or Mind Maps—all within a single notebook.
The update also allows users to interact with different tools simultaneously, such as listening to an AI podcast while reviewing a study guide, offering a more integrated and versatile learning experience.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
In the digital world, tracking occurs through digital signals sent from one computer to a server, and from a server to an organisation. Almost immediately, a profile of a user can be created. The information can be leveraged to send personalised advertisements for products and services consumers are interested in, but it can also classify people into categories and serve them advertisements designed to steer them in a certain direction, for example politically (the 2024 Romanian election, or the Cambridge Analytica scandal's role in skewing the 2016 Brexit referendum and the 2016 US elections).
Digital tracking can be carried out at minimal cost and with rapid execution, reaching hundreds of thousands of users simultaneously. These methods require either technical skills (such as coding) or access to platforms that automate tracking.
This phenomenon has been well documented and likened to George Orwell's 1984, in which the people of Oceania are subject to constant surveillance by 'Big Brother' and institutions of control: the Ministries of Truth (propaganda), Peace (military control), Love (torture and forced loyalty) and Plenty (manufactured prosperity).
A related concept is the Panopticon, a prison design by Jeremy Bentham enabling constant observation from a central point, on which the French philosopher Michel Foucault built his social theory of surveillance. Prisoners never know if they are being watched and thus self-regulate their behaviour. In today's tech-driven society, our digital behaviour is similarly regulated through the persistent possibility of surveillance.
How are we tracked? The case of cookies and device fingerprinting
Cookies
Cookies are small, unique text files placed on a user’s device by their web browser at the request of a website. When a user visits a website, the server can instruct the browser to create or update a cookie. These cookies are then sent back to the server with each subsequent request to the same website, allowing the server to recognise and remember certain information (login status, preferences, or tracking data).
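To make this mechanism concrete, here is a minimal sketch of a server setting and reading back a cookie, written in TypeScript for Node.js; the cookie name visitor_id and the logic are illustrative, not any particular site's implementation:

```typescript
import { createServer } from "node:http";
import { randomUUID } from "node:crypto";

// Minimal illustration of the cookie mechanism: the server asks the
// browser to store an identifier, and the browser returns it with
// every subsequent request to the same site.
createServer((req, res) => {
  const match = (req.headers.cookie ?? "").match(/visitor_id=([^;]+)/);

  if (match) {
    // Returning visitor: the browser sent the cookie back automatically.
    res.end(`Welcome back, visitor ${match[1]}`);
  } else {
    // First visit: instruct the browser to create the cookie.
    const id = randomUUID();
    res.setHeader("Set-Cookie", `visitor_id=${id}; Path=/; HttpOnly`);
    res.end("First visit: cookie set");
  }
}).listen(8080);
```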
If a user visits multiple websites about a specific topic, that pattern can be collected and sold to advertisers targeting that interest. This applies to all forms of advertising, not just commercial but also political and ideological influence.
Device fingerprinting
Device fingerprinting involves generating a unique identifier using a device’s hardware and software characteristics. Types include browser fingerprinting, mobile fingerprinting, desktop fingerprinting, and cross-device tracking. To assess how unique a browser is, users can test their setup via the Cover Your Tracks tool by the Electronic Frontier Foundation.
The information collected includes the operating system, language settings, keyboard settings, screen resolution, fonts used, device make and model, and more. The more data points collected, the more unique an individual's device becomes.
A common reason to use device fingerprinting is for advertising. Since each individual has a unique identifier, advertisers can distinguish individuals from one another and see which websites they visit based on past collected data.
As with cookies, device fingerprinting is not purely about advertising; it also serves legitimate security purposes. Because it creates a unique ID for a device, it allows websites to recognise returning devices, which is useful for combating fraud. For instance, if a known account suddenly logs in from a device with an unknown fingerprint, fraud detection mechanisms may flag and block the login attempt.
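A minimal sketch of both uses, assuming it runs in a browser; the attributes, hash and function names are illustrative, and real fingerprinting scripts combine far more signals:

```typescript
// Illustrative only: combine a few of the attributes mentioned above
// into a single identifier.
function deviceFingerprint(): string {
  const signals = [
    navigator.userAgent,                                      // browser + OS
    navigator.language,                                       // language settings
    `${screen.width}x${screen.height}x${screen.colorDepth}`,  // screen resolution
    String(new Date().getTimezoneOffset()),                   // timezone
    String(navigator.hardwareConcurrency ?? "unknown"),       // CPU cores
  ].join("|");

  // Simple 32-bit FNV-1a hash; real trackers use stronger hashing.
  let hash = 0x811c9dc5;
  for (let i = 0; i < signals.length; i++) {
    hash ^= signals.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash.toString(16);
}

// Fraud-detection use: flag a login when the account's stored
// fingerprint does not match the current device (names hypothetical).
function isSuspiciousLogin(storedFingerprint: string): boolean {
  return deviceFingerprint() !== storedFingerprint;
}
```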
Legal considerations
Apart from societal impacts, there are legal considerations, specifically concerning fundamental rights. In the EU and wider Europe, Articles 7 and 8 of the Charter of Fundamental Rights and Article 8 of the European Convention on Human Rights give rise to the protection of personal data in the first place. They form the legal bedrock of digital privacy legislation, such as the GDPR and the ePrivacy Directive. Under the GDPR, personal data is protected against unlawful, unfair and opaque processing.
For tracking to be carried out lawfully, one of the six legal bases of the GDPR must be relied upon. For tracking, the applicable basis is usually consent (Article 6(1)(a) GDPR, reinforced by the consent requirement in Article 5(3) of the ePrivacy Directive).
Other legal bases, such as a business's legitimate interest, may allow limited analytics cookies to be placed, but the tracking cookies discussed in this analysis do not fall into that category.
Regardless, to obtain valid consent, websites must ensure that it is collected before any processing occurs and that it is freely given, specific, informed and unambiguous. In most cases of website tracking, consent is not collected before processing begins.
In practice, this means that cookies are placed on the user's device before the visitor has even answered the consent request. There are additional concerns about consent not being informed, as users do not know what the processing of personal data for tracking actually entails.
Moreover, consent is rarely specific to what is necessary for the processing: websites justify it with broad, unspecified purposes such as 'improving visitor experience' or 'understanding the website better', explanations that remain generic.
Further, tracking is typically unfair, as users do not expect to be tracked across sites or to have digital profiles built about them based on their website visits. Tracking is also opaque: website owners state that tracking occurs but do not explain how it works, for how long it lasts, what personal data is used to track, or how it benefits them.
Can we refuse tracking?
In theory, it is possible to prevent tracking from the outset by refusing to give consent when it is requested. However, in practice, refusing consent can still lead to tracking. Outlined below are two concrete examples of this happening daily.
Cookies
Regarding cookies, the refusal of tracking is, simply put, not honoured but ignored. Studies have found that when a user visits a website and refuses consent, cookies and similar tracking technologies are placed on the device anyway, as if the user had accepted.
This increases user frustration, as the choice offered turns out to be illusory. It happens because non-essential cookies, which can be refused, are lumped together with essential cookies, which cannot be. As a result, when a user refuses non-essential cookies, not all of them are actually refused, since some are mislabelled as essential.
Another reason is that cookies are placed before consent is sought. Website owners often outsource cookie banner compliance to more experienced companies, using consent management platforms (CMPs) such as Cookiebot by Usercentrics or OneTrust.
In these CMPs, the option to load cookies only after consent has been given can require manual configuration. Website owners therefore need to understand consent requirements well enough to know that cookies must not be placed before consent is sought.
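To make the compliant flow concrete, here is a minimal sketch of a consent-gated loader; the function names, callback shape and URL are hypothetical and do not reflect any specific CMP's API:

```typescript
// A consent-gated loader: the tracking script is injected only after
// the visitor grants consent. Placing cookies before this point
// violates the 'prior consent' requirement described above.
function loadTracker(src: string): void {
  const script = document.createElement("script");
  script.src = src;
  script.async = true;
  document.head.appendChild(script);
}

// Hypothetical callback a CMP would fire once the visitor has chosen.
function onConsentGranted(categories: { marketing: boolean }): void {
  // Non-essential trackers are only loaded once consent exists.
  if (categories.marketing) {
    loadTracker("https://tracker.example.com/pixel.js"); // placeholder URL
  }
}

// Compliant flow: nothing tracking-related runs before the callback.
// Non-compliant flow: loadTracker() is called at page load, before consent.
```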
Another example relates to Google Consent Mode (GCM). GCM is relevant here because Google is the most common third-party tracker on the web, and thus the tracker users are most likely to encounter, operating a vast array of trackers covering statistics, analytics, preferences, marketing and more. GCM essentially creates a path for website analytics to continue despite consent being refused: Google claims it can send cookieless ping signals from user devices to measure how many users have viewed a website, clicked on a page, searched for a term, and so on.
Google presents this as a novel, privacy-friendly solution, since no cookies are required. However, a study of tags, specifically GCM tags, found that GCM is not privacy-friendly and infringes the GDPR. The study found that Google still collects personal data in these 'cookieless ping signals', including the user's language, screen resolution, computer architecture, user agent string, operating system and version, the complete web page URL and search keywords. Since this data is collected and processed despite the user refusing consent, there are clear legal issues.
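Based only on the data points the study lists, a 'cookieless ping' could look roughly like the sketch below; the endpoint and parameter names are placeholders, not Google's actual implementation:

```typescript
// Hypothetical reconstruction: no cookie is set, yet the request still
// carries device- and page-identifying data of the kind the study found.
function sendCookielessPing(): void {
  const params = new URLSearchParams({
    lang: navigator.language,                 // user language
    sr: `${screen.width}x${screen.height}`,   // screen resolution
    ua: navigator.userAgent,                  // user agent string, OS hints
    platform: navigator.platform,             // architecture hint
    url: location.href,                       // complete web page URL
  });
  // sendBeacon transmits in the background, with no cookie attached.
  navigator.sendBeacon(`https://analytics.example.com/ping?${params}`);
}
```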
The first issue stems from the general principle of lawfulness: Google has no lawful basis to process this personal data, as the user refused consent and no other legal basis applies. The second stems from the general principle of fairness: users do not expect that, after refusing trackers and choosing the more privacy-friendly option, their data will still be processed as if their choice did not matter.
From Google's perspective, then, GCM is privacy-friendly because no cookies are placed, and thus no consent needs to be sought. A recent study, however, revealed that personal data is still being processed without any permission or legal basis.
What next?
On an individual level:
Many solutions have been developed for individuals to reduce the tracking they are subject to, from browser extensions and ad blockers to more privacy-friendly devices. One notable company tackling the issue is DuckDuckGo, whose browser rejects trackers by default, offers email protection and reduces tracking overall. DuckDuckGo is not alone; tools such as uBlock Origin and Ghostery offer similar protections.
Regarding device fingerprinting specifically, researchers have developed countermeasures. In 2023, researchers proposed ShieldF, a Chromium add-on that reduces fingerprinting for mobile apps and browsers. Other measures include using an IP address shared by many people, which a home Wi-Fi connection does not offer. Combining a browser extension with a VPN is also not suitable for everyone, as it demands substantial effort and sometimes financial cost.
On a systemic level:
CMPs and GCM are active stakeholders in the tracking ecosystem, and their actions are subject to enforcement bodies, predominantly data protection authorities (DPAs). One prominent DPA working on cookie enforcement is the Dutch Autoriteit Persoonsgegevens (AP). In early 2025, the AP publicly stated that its focus for the year would be cookie compliance, announcing investigations into 10,000 websites in the Netherlands. This has led to investigations into companies with unlawful cookie banners, concluding in warnings and sanctions.
However, these investigations require extensive time and effort. DPAs have already stated that they are overworked and lack the personnel and financial resources to cope with their growing responsibilities. Coupled with the fact that sanctioned companies set aside financial reserves for fines, and that some non-EU businesses simply do not comply with DPA decisions (as in the case of Clearview AI), this means different ways of tackling non-compliance should be investigated.
For example, the GDPR simplification package, while simplifying some measures, could also introduce liability measures to ensure that enforcement is as vigorous as the legislation itself. The EU has not shied away from holding management boards liable for non-compliance: in separate cybersecurity legislation, NIS II Article 20(1) states that 'management bodies of essential and important entities approve the cybersecurity risk-management measures (…) can be held liable for infringements (…)', allowing board members to be held liable for the specific risk-management measures in Article 21. If similar measures cannot be introduced now, future rounds of amendment offer further opportunities.
Conclusion
Cookies and device fingerprinting are two common means of tracking. The potentially far-reaching societal and legal consequences of tracking demand that the existing, robust legislation be enforced, to ensure that past political mistakes are not repeated.
Ultimately, there is no way to completely prevent fingerprinting and cookie-based tracking without significantly compromising the user's browsing experience. For this reason, the burden of responsibility must shift toward CMPs. This shift should begin with the implementation of privacy-by-design and privacy-by-default principles in the development of their tools, for example by preventing cookie placement before consent is sought.
Accountability should come through tangible consequences, such as liability for board members in cases of negligence. By attributing responsibility to the companies that develop cookie banners and facilitate trackers, the source of the problem can be addressed and those companies held accountable for their human rights violations.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
India’s Income Tax Department is using AI and data tools to identify tax evasion in cryptocurrency transactions. The government collected ₹437 crore in crypto taxes in 2022-2023 using machine learning and digital forensics to spot suspicious activity.
Tax authorities match tax deducted at source (TDS) data from crypto exchanges to improve compliance. The introduction of the Crypto-Asset Reporting Framework (CARF) also enables automated sharing of tax information, aligning India's efforts with international tax agreements.
These moves mark a push for greater transparency in India’s digital asset market. Enhanced wallet visibility and automatic data exchange aim to reduce anonymity and curb tax evasion in the crypto space.
India continues to develop regulations focused on consumer protection, cross-border cooperation, and tax compliance, demonstrating a commitment to a more traceable and accountable crypto industry.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI CEO Sam Altman also expressed concerns about the potential misuse of AI, such as using voice cloning for fraud and identity theft. He emphasised the need for stronger privacy protections for sensitive conversations with AI tools like ChatGPT, noting that current standards are inadequate and should align with those for therapists.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI is quickly transforming the music industry, with AI-generated bands now drawing millions of plays on platforms like Spotify.
While these acts may sound like traditional musicians, they are entirely digital creations. Streaming services rarely label AI music clearly, and the producers behind these tracks often remain anonymous and unreachable. Human artists, meanwhile, are quietly watching their workload dry up.
Music professionals are beginning to express concern. Composer Leo Sidran believes AI is already taking work away from creators like him, noting that many former clients now rely on AI-generated solutions instead of original compositions.
Unlike previous tech innovations, which empowered musicians, AI risks erasing job opportunities entirely, according to Berklee College of Music professor George Howard, who warns it could become a zero-sum game.
AI music is especially popular for passive listening—background tracks for everyday life. In contrast, real musicians still hold value among fans who engage more actively with music.
However, AI is cheap, fast, and royalty-free, making it attractive to publishers and advertisers. From film soundtracks to playlists filled with faceless artists, synthetic sound is rapidly replacing human creativity in many commercial spaces.
Experts urge musicians to double down on what makes them unique instead of mimicking trends that AI can easily replicate. Live performance remains one of the few areas where AI has yet to gain traction. Until synthetic bands take the stage, artists may still find refuge in concerts and personal connection with fans.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
ChatGPT has become an everyday tool for many, serving as a homework partner, a research aid, and even a comforting listener. But questions are beginning to emerge about the emotional bonds users form with it. A recent LinkedIn post has reignited the debate around AI overuse.
Simrann M Bhambani, a marketing professional at Flipkart, publicly shared her decision to delete ChatGPT from her devices. In a post titled ‘ChatGPT is TOXIC! (for me)’, she described how casual interaction escalated into emotional dependence. The platform began to resemble a digital therapist.
Bhambani admitted to confiding every minor frustration and emotional spiral to the chatbot. Its constant availability and non-judgemental replies gave her a false sense of security. Even with supportive friends, she felt drawn to the machine’s quiet reliability.
What began as curiosity turned into compulsion. She found herself spending hours feeding the bot intrusive thoughts and endless questions. ‘I gave my energy to something that wasn’t even real,’ she wrote. The experience led to more confusion instead of clarity.
Rather than offering mental relief, the chatbot fuelled her overthinking. The emotional noise grew louder, eventually becoming overwhelming. She realised that the problem wasn’t the technology itself, but how it quietly replaced self-reflection.
Deleting the app marked a turning point. Bhambani described the decision as a way to reclaim mental space and reduce digital clutter. She warned others that AI tools, while useful, can easily replace human habits and emotional processing if left unchecked.
Many users may not notice such patterns until they are deeply entrenched. AI chatbots are designed to be helpful and responsive, but they lack the nuance and care of human conversation. Their steady presence can foster a deceptive sense of intimacy.
People increasingly rely on digital tools to navigate their daily emotions, often without understanding the consequences. Some may find themselves withdrawing from human relationships or journalling less often. Emotional outsourcing to machines can significantly change how people process personal experiences.
Industry experts have warned about the risks of emotional reliance on generative AI. Chatbots are known to produce inaccurate or hallucinated responses, especially when asked to provide personal advice. Sole dependence on such tools can lead to misinformation or emotional confusion.
Companies like OpenAI have stressed that ChatGPT is not a substitute for professional mental health support. While the bot is trained to provide helpful and empathetic responses, it cannot replace human judgement or real-world relationships. Boundaries are essential.
Mental health professionals also caution against using AI as an emotional crutch. Reflection and self-awareness take time and require discomfort, which AI often smooths over. The convenience can dull long-term growth and self-understanding.
Bhambani’s story has resonated with many who have quietly developed similar habits. Her openness has sparked important discussions on emotional hygiene in the age of AI. More users are starting to reflect on their relationship with digital tools.
Social media platforms are also witnessing an increased number of posts about AI fatigue and cognitive overload. People are beginning to question how constant access to information and feedback affects emotional well-being. There is growing awareness around the need for balance.
AI is expected to become even more integrated into daily life, from virtual assistants to therapy bots. Recognising the line between convenience and dependency will be key. Tools are meant to serve, not dominate, personal reflection.
Developers and users alike must remain mindful of how often and why they turn to AI. Chatbots can complement human support systems, but they are not replacements. Bhambani’s experience serves as a cautionary tale in the age of machine intimacy.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google has officially introduced its AI Mode to UK users, calling it the most advanced version of its search engine.
Instead of listing web links, the feature provides direct, human-like answers to queries. It allows users to follow up with more detailed questions or multimedia inputs such as voice and images. The update aims to keep pace with the rising trend of longer, more conversational search phrases.
The tool first launched in the US and uses a ‘query fan-out’ method, breaking down complex questions into multiple search threads to create a combined answer from different sources.
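Google has not published the internals of this method, but the general idea can be sketched; every name in the sketch below is hypothetical:

```typescript
// Conceptual 'query fan-out': decompose a complex question into
// sub-queries, run them in parallel, and combine the results into
// a single answer.
async function fanOut(
  question: string,
  decompose: (q: string) => string[],
  search: (q: string) => Promise<string>,
  combine: (parts: string[]) => string,
): Promise<string> {
  const subQueries = decompose(question);                     // e.g. several narrower searches
  const results = await Promise.all(subQueries.map(search));  // run in parallel
  return combine(results);                                    // synthesise one answer
}
```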
While Google claims this will result in more meaningful site visits, marketers and publishers are worried about a growing trend known as ‘zero-click searches’, where users find what they need without clicking external links.
Research already shows a steep drop in engagement. Data from the Pew Research Center reveals that only 8% of users click a link when AI summaries are present, nearly half the rate of traditional search pages. Experts warn that without adjusting strategies, many online brands risk becoming invisible.
Instead of relying solely on classic SEO tactics, businesses are being urged to adopt Generative Engine Optimisation (GEO). Using tools like schema markup, GEO focuses on conversational content, visual media, and context-aware formatting.
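As one concrete GEO building block, the sketch below injects schema.org structured data (JSON-LD) into a page so AI-driven engines can parse it directly; the product values are invented for illustration:

```typescript
// schema.org structured data describing a product in machine-readable form.
const productSchema = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "Example Trail Shoe",
  description: "Lightweight trail running shoe with a recycled upper.",
  aggregateRating: {
    "@type": "AggregateRating",
    ratingValue: 4.6,
    reviewCount: 212,
  },
};

// Embedded in the page head as a JSON-LD script tag.
const tag = document.createElement("script");
tag.type = "application/ld+json";
tag.textContent = JSON.stringify(productSchema);
document.head.appendChild(tag);
```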
With nearly half of UK users engaging with AI search daily, adapting to these shifts may prove essential for maintaining visibility and sales.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Microsoft has launched Copilot Mode in its Edge browser, adding AI features to streamline online activity.
Instead of switching between tabs or manually comparing information, users can ask Copilot to complete tasks, search for content, and make suggestions. The tool is available for PC and Mac users and opens in a side panel, letting people interact with it while still viewing the original page.
Copilot can help with everyday tasks such as writing content, preparing grocery lists, and scheduling appointments. It works across multiple tabs if the user permits, enabling comparisons like hotel or flight prices in a single command.
Voice input is also supported, making it easier for those with limited mobility or less familiarity with AI tools to interact naturally.
Microsoft notes that Copilot Mode remains experimental, but users can still set it as the default. It supports conversational prompts and dynamic interactions, such as turning recipes vegan, converting measurements or translating languages, all without losing the user's place on the page.
Users may eventually provide login or history access for more advanced tasks, although full consent and clear notifications will be required.
With growing reliance on digital assistants, Microsoft’s move puts Edge in direct competition with other AI-enabled browsers. As more AI tools become embedded in everyday software, the company expects Copilot to evolve rapidly and suggest next steps to help users pick up where they left off.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Sales of the Ray-Ban Meta smart glasses have more than tripled in the first half of 2025, cementing Meta’s dominance in the growing AI wearables market.
While Apple remains quiet on a possible launch of its own AI glasses, Meta and its partner EssilorLuxottica continue to expand their lead. The eyewear giant revealed a 200% rise in Ray-Ban Meta sales, with second-quarter revenue up by over 7% compared to last year.
Smart glasses still represent a small part of both companies’ revenue, yet expectations are rising fast. In June, the firms announced a new model – Oakley AI performance glasses – which they hope will match the success of the Ray-Ban line.
Francesco Milleri, EssilorLuxottica’s CEO, stated they expect a ‘very fast ramp-up’ of the Oakley Meta model.
Meta’s Ray-Ban glasses have been on the market for nearly two years, but recent updates have added live translation features and visual recognition that allows the glasses to interpret scenes in real time.
A version with an integrated display is rumoured to launch later in 2025, and Meta is also developing a high-end model called Orion.
Apple, meanwhile, appears more focused on mixed reality, with reports of a second-generation Vision Pro; Samsung's Project Moohan may follow a similar route. But in the space of everyday wearable AI, Meta currently stands alone, at least until the competition decides to enter the arena.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google has updated its Chrome browser to include AI-generated summaries of online stores, aimed at helping shoppers in the US make more informed buying decisions.
Instead of manually searching through reviews, users can now click an icon next to the web address to see a summary of a shop’s performance across key areas like product quality, pricing, returns, and customer service.
The feature is currently available only in English and is limited to desktop users.
The summaries are generated from a range of trusted review platforms, including Trustpilot, Bazaarvoice, Bizrate Insights, and others. Google says that the tool will offer a more efficient and secure online shopping experience.
It also helps the tech giant better compete with Amazon, which has already rolled out AI tools for product comparisons, fit suggestions, and ratings analysis. The move forms part of Google’s wider push to turn Chrome into a more powerful e-commerce assistant.
The company is also integrating AI tools like the Gemini assistant and developing agentic AI systems that can carry out tasks in the browser on a user’s behalf.
At the same time, Chrome faces fresh competition from AI-first browsers such as Perplexity’s Comet, Opera Neon, and a possible entry from OpenAI.
By adding AI-powered features directly into Chrome, Google hopes to future-proof its browser while strengthening its position in online retail.
As rivals begin to build intelligent browsers from the ground up, Google is reimagining how Chrome can serve users beyond simple search and browsing.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!