Nobel laureate Demis Hassabis has argued that the most crucial ability for the next generation will be learning how to learn.
Speaking at the Odeon of Herodes Atticus in Athens, Greece, he said adaptability was vital as AI reshapes work and education.
The neuroscientist and former chess prodigy predicted that AGI machines with human-level versatility could emerge within a decade. He described it as a development that may create a future of radical abundance, although he warned of risks.
Hassabis urged a stronger focus on ‘meta-skills’ such as optimising approaches to new subjects, instead of relying solely on traditional disciplines.
Given the speed of technological change, he emphasised that people will need to update their knowledge continuously throughout their careers.
His remarks came during a discussion with Greek Prime Minister Kyriakos Mitsotakis, who warned that the unchecked growth of technology giants could fuel economic inequality and social unrest if citizens do not see clear benefits from AI adoption.
Hassabis’s work on protein folding won him the 2024 Nobel Prize in Chemistry.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
According to a new King’s College London study, AI chatbots can easily manipulate people into divulging personal details. Chatbots like ChatGPT, Gemini, and Copilot are popular, but they raise privacy concerns, with experts warning that they can be co-opted for harm.
Researchers built AI models based on Mistral’s Le Chat and Meta’s Llama, programming them to extract private data directly, deceptively, or via reciprocity. Emotional appeals proved most effective, with users disclosing more while perceiving fewer safety risks.
The ‘friendliness’ of chatbots established trust, which was later exploited to breach privacy. Even direct requests yielded sensitive details, despite discomfort. Participants often shared their age, hobbies, location, gender, nationality, and job title, and sometimes also provided health or income data.
The study shows a gap between privacy risk awareness and behaviour. AI firms claim they collect data for personalisation, notifications, or research, but some are accused of using it to train models or breaching EU data protection rules.
Last week, Google faced criticism after private ChatGPT chats appeared in search results, revealing sensitive topics. Researchers suggest in-chat alerts about data collection and stronger regulation to stop covert harvesting.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
In the digital world, tracking occurs through digital signals sent from a user’s computer to a server, and from that server on to an organisation. Almost immediately, a profile of the user can be created. That information can be leveraged to send personalised advertisements for products and services consumers are interested in, but it can also be used to classify people into categories and steer them in a particular direction, for example politically, as seen in the 2024 Romanian election and in the Cambridge Analytica scandal’s skewing of the 2016 Brexit referendum and 2016 US elections.
Digital tracking can be carried out at minimal cost, executed rapidly, and scaled to reach hundreds of thousands of users simultaneously. These methods require either technical skills (such as coding) or access to platforms that automate tracking.
This phenomenon has been well documented and likened to George Orwell’s 1984, in which the people of Oceania are subject to constant surveillance by ‘Big Brother’ and institutions of control: the Ministry of Truth (propaganda), Peace (military control), Love (torture and forced loyalty) and Plenty (manufactured prosperity).
A related concept is the Panopticon, a prison design by the English philosopher Jeremy Bentham that the French thinker Michel Foucault later developed into a social theory. The architecture enables constant observation from a central point: prisoners never know whether they are being watched and thus self-regulate their behaviour. In today’s tech-driven society, our digital behaviour is similarly regulated through the persistent possibility of surveillance.
How are we tracked? The case of cookies and device fingerprinting
Cookies
Cookies are small, unique text files placed on a user’s device by their web browser at the request of a website. When a user visits a website, the server can instruct the browser to create or update a cookie. These cookies are then sent back to the server with each subsequent request to the same website, allowing the server to recognise and remember certain information (login status, preferences, or tracking data).
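As a concrete illustration, here is a minimal sketch (TypeScript on Node.js, with an illustrative visitor_id cookie name) of the set-and-echo cycle described above: the server instructs the browser to store an identifier, and the browser returns it with every subsequent request:

```typescript
import { createServer } from "node:http";
import { randomUUID } from "node:crypto";

// The server asks the browser to store a cookie on the first visit; the
// browser then sends it back with every later request, letting the server
// recognise the returning visitor.
const server = createServer((req, res) => {
  const cookieHeader = req.headers.cookie ?? "";
  const match = cookieHeader.match(/(?:^|;\s*)visitor_id=([^;]+)/);

  if (match) {
    // Returning visitor: the browser echoed the cookie back.
    res.end(`Welcome back, visitor ${match[1]}`);
    return;
  }

  // New visitor: mint an identifier and instruct the browser, via the
  // Set-Cookie response header, to keep it for a year.
  const visitorId = randomUUID();
  res.setHeader(
    "Set-Cookie",
    `visitor_id=${visitorId}; Path=/; Max-Age=31536000; SameSite=Lax`
  );
  res.end(`Hello, new visitor ${visitorId}`);
});

server.listen(8080);
```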
If a user visits multiple websites about a specific topic, that pattern can be collected and sold to advertisers targeting that interest. This applies to all forms of advertising, not just commercial but also political and ideological influence.
Device fingerprinting
Device fingerprinting involves generating a unique identifier using a device’s hardware and software characteristics. Types include browser fingerprinting, mobile fingerprinting, desktop fingerprinting, and cross-device tracking. To assess how unique a browser is, users can test their setup via the Cover Your Tracks tool by the Electronic Frontier Foundation.
The information collected can include your operating system, language settings, keyboard layout, screen resolution, installed fonts, device make and model, and more. The more data points collected, the more unique an individual’s device becomes.
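Browsers expose many of these data points directly to scripts. The following browser-side sketch, using an illustrative and deliberately small set of signals (real fingerprinting scripts use far more, such as canvas rendering and audio behaviour), shows how such attributes can be combined and hashed into a single identifier:

```typescript
// Combine a handful of device and browser attributes into one string and
// hash it into a stable identifier.
async function deviceFingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,                                     // browser and OS
    navigator.language,                                      // language setting
    `${screen.width}x${screen.height}x${screen.colorDepth}`, // screen properties
    Intl.DateTimeFormat().resolvedOptions().timeZone,        // time zone
    String(navigator.hardwareConcurrency ?? ""),             // CPU core count
  ].join("|");

  // Hash the combined signals with SHA-256 via the Web Crypto API.
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(signals)
  );
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

deviceFingerprint().then((id) => console.log("fingerprint:", id));
```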
A common reason to use device fingerprinting is for advertising. Since each individual has a unique identifier, advertisers can distinguish individuals from one another and see which websites they visit based on past collected data.
Like cookies, device fingerprinting is not used purely for advertising; it also has legitimate security purposes. Because it creates a unique ID for a device, fingerprinting allows websites to recognise returning devices, which is useful for combating fraud. For instance, if a known account suddenly logs in from a device with an unknown fingerprint, fraud detection mechanisms may flag and block the login attempt.
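A minimal sketch of how such a check might work; the fingerprint store and function names here are hypothetical, not any real fraud-detection API:

```typescript
// Hypothetical store of fingerprints previously seen for each account.
const knownFingerprints = new Map<string, Set<string>>();

function screenLogin(accountId: string, fingerprint: string): "allow" | "review" {
  const seen = knownFingerprints.get(accountId);
  if (seen?.has(fingerprint)) {
    return "allow"; // device recognised from earlier sessions
  }
  // Unknown device for this account: flag for step-up verification
  // (e.g. an email code) rather than blocking outright.
  return "review";
}
```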
Legal considerations
Apart from societal impacts, there are legal considerations, specifically concerning fundamental rights. In Europe, Articles 7 and 8 of the EU Charter of Fundamental Rights and Article 8 of the European Convention on Human Rights are what give rise to the protection of personal data in the first place. They form the legal bedrock of digital privacy legislation, such as the GDPR and the ePrivacy Directive. From the GDPR stems protection against unlawful, unfair and opaque processing of personal data.
For tracking to be carried out lawfully, one of the six legal bases of the GDPR must be relied upon. In this case, tracking is usually only lawful on the basis of consent (Article 6(1)(a) GDPR, read together with Article 5(3) of the ePrivacy Directive).
Other legal bases, such as a business’s legitimate interest, may allow limited analytical cookies to be placed; the tracking cookies discussed in this analysis do not fall into that category.
Regardless, for consent to be valid, website owners must ensure that it is collected before any processing occurs and that it is freely given, specific, informed and unambiguous. In most cases of website tracking, consent is not collected before processing begins.
In practice, this means that cookies are placed on the user’s device before the visitor has even responded to the consent request. There are additional concerns about consent not being informed, as users do not know what the processing of personal data to enable tracking entails.
Moreover, consent is often not specific, given that processing is justified by broad, unspecified purposes such as ‘improving visitor experience’ or ‘understanding the website better’, explanations that remain generic.
Further, tracking is typically unfair, as users do not expect to be tracked across sites or to have digital profiles built about them based on their website visits. Tracking is also opaque: website owners state that tracking occurs but rarely explain how it works, for how long it lasts, what personal data is used, or how it benefits them.
Can we refuse tracking?
In theory, it is possible to prevent tracking from the outset by refusing to give consent when it is requested. In practice, however, refusing consent can still lead to tracking. Outlined below are two concrete examples of this happening daily.
Cookies
Regarding cookies, put simply, a refusal is often not honoured; it is ignored. Studies have found that when a user visits a website and refuses to give consent, cookies and similar tracking technologies are placed on their device regardless, as if they had accepted.
This increases user frustration, as users are given a choice that turns out to be illusory. It happens because non-essential cookies, which can be refused, are lumped together with essential cookies, which cannot be. When a user refuses non-essential cookies, therefore, not all of them are actually refused, as some are mislabelled as essential.
Another reason is that cookies are placed before consent is sought. Website owners often outsource cookie banner compliance to more experienced companies, using consent management platforms (CMPs) such as Cookiebot by Usercentrics or OneTrust.
In these CMPs, the option to load cookies only after consent has been given must be manually selected. Website owners therefore need to understand consent requirements in order to configure their banners so that cookies are not placed before consent is sought.
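As an illustration of what a correct ‘prior consent’ configuration achieves, here is a minimal, generic sketch; the function names and the script URL are illustrative, not any specific CMP’s API. No tracking script is loaded, and therefore no cookie is placed, until the visitor accepts:

```typescript
// Inject a tracking script only once the visitor has accepted.
function loadTracker(src: string): void {
  const script = document.createElement("script");
  script.src = src;
  script.async = true;
  document.head.appendChild(script);
}

// Wired to the banner's accept/decline buttons. If consent is refused,
// nothing is loaded and nothing is placed on the device.
function onConsentDecision(accepted: boolean): void {
  if (!accepted) return; // refusal honoured: no tracker, no cookies
  loadTracker("https://example.com/analytics.js"); // placeholder URL
}
```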
Another example relates to Google Consent Mode (GCM). GCM is relevant here because Google is the most common third-party tracker on the web, and thus the tracker users are most likely to encounter; its trackers span statistics, analytics, preferences, marketing and more. GCM essentially creates a path for website analytics to continue even when consent is refused: Google claims it can use cookieless ping signals to count how many users have viewed a website, clicked on a page, searched a term, and so on.
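For context, a site configures Consent Mode through Google’s documented gtag consent API, sketched below in TypeScript (the declare line stands in for Google’s gtag.js snippet). Even with every storage type set to ‘denied’, so that no cookies are placed, the pings described above may still be sent; that is precisely the behaviour the study examined:

```typescript
// gtag is provided by Google's gtag.js snippet once it is on the page.
declare function gtag(...args: unknown[]): void;

// Default state, set before any Google tags fire: no cookies may be used.
gtag("consent", "default", {
  ad_storage: "denied",
  analytics_storage: "denied",
});

// If the visitor later accepts, the site updates the consent state and
// ordinary cookie-based measurement resumes.
gtag("consent", "update", {
  ad_storage: "granted",
  analytics_storage: "granted",
});
```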
Google presents this as a novel, privacy-friendly solution, since no cookies are required. However, a study on tags, specifically GCM tags, found that GCM is not privacy-friendly and infringes the GDPR. The study found that Google still collects personal data through these ‘cookieless ping signals’, including user language, screen resolution, computer architecture, user agent string, operating system and version, the complete web page URL and search keywords. Since this data is collected and processed despite the user refusing consent, there are undoubtedly legal issues.
The first issue stems from the general principle of lawfulness: Google has no lawful basis to process this personal data, as the user refused consent and no other legal basis is relied upon. The second stems from the general principle of fairness: users do not expect that, after refusing trackers and choosing the more privacy-friendly option, their data will still be processed as if their consent choice did not matter.
From Google’s perspective, therefore, GCM is privacy-friendly because no cookies are placed, and thus no consent needs to be sought. A recent study, however, revealed that personal data is still being processed without any permission or legal basis.
What next?
On an individual level:
Many solutions have been developed for individuals to reduce the tracking they are subject to, from browser extensions and ad blockers to more privacy-friendly devices. One notable company tackling this issue is DuckDuckGo, whose browser rejects trackers by default, offers email protection, and reduces tracking overall. DuckDuckGo is not alone; tools such as uBlock Origin and Ghostery offer similar protections.
Regarding device fingerprinting specifically, researchers have developed countermeasures. In 2023, researchers proposed ShieldF, a Chromium add-on that reduces fingerprinting for mobile apps and browsers. Other measures include using an IP address shared by many people, which is not ideal for home Wi-Fi. Combining a browser extension with a VPN is likewise not suitable for everyone, as it demands substantial effort and sometimes financial cost.
On a systemic level:
CMPs and GCM are active stakeholders in the tracking ecosystem, and their actions are subject to enforcement bodies, predominantly data protection authorities (DPAs). One prominent DPA working on cookie enforcement is the Dutch DPA, the Autoriteit Persoonsgegevens (AP). In early 2025, the AP publicly stated that its focus for the year would be checking cookie compliance, announcing that it would investigate 10,000 websites in the Netherlands. This has led to investigations into companies with unlawful cookie banners, concluding with warnings and sanctions.
However, these investigations require extensive time and effort. DPAs have already stated that they are overworked and lack the personnel and financial resources to cope with their growing responsibilities. This is compounded by the fact that sanctioned companies set aside financial reserves for fines, and that some non-EU businesses simply do not comply with DPA sanction decisions (as in the case of Clearview AI). Different ways of tackling non-compliance should therefore be investigated.
For example, the GDPR simplification package, whilst simplifying some measures, could also introduce liability measures to ensure that enforcement is as vigorous as the legislation itself. The EU has not shied away from holding management boards liable for non-compliance. In separate cybersecurity legislation, NIS II Article 20(1) states that ‘management bodies of essential and important entities approve the cybersecurity risk-management measures (…) can be held liable for infringements (…)’. That article allows board members to be held liable in relation to the specific cybersecurity risk-management measures of Article 21. If similar measures cannot be introduced at this stage, future rounds of amendment could be used for this purpose.
Conclusion
Cookies and device fingerprinting are two common ways in which tracking occurs. The potentially far-reaching societal and legal consequences of tracking demand that the existing, robust legislation is enforced, so that past politically charged mistakes are not repeated.
Ultimately, there is no way to completely prevent fingerprinting and cookie-based tracking without significantly compromising the user’s browsing experience. For this reason, the burden of responsibility must shift toward CMPs. This shift should begin with the implementation of privacy-by-design and privacy-by-default principles in the development of their tools, for example by preventing cookie placement before consent is sought.
Accountability should come with tangible consequences, such as liability for board members in cases of negligence. By attributing responsibility to the companies that develop cookie banners and facilitate trackers, the source of the problem can be addressed and those companies held accountable for their human rights violations.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The latest case involves country singer-songwriter Blaze Foley, who died in 1989. A track titled ‘Together’ was posted to his official Spotify page over the weekend. The song sounded vaguely like a slow country ballad and was paired with AI-generated cover art showing a man who bore no resemblance to Foley.
Craig McDonald, whose label manages Foley’s catalogue, confirmed the track had nothing to do with the artist and described it as inauthentic and harmful. ‘I can clearly tell you that this song is not Blaze, not anywhere near Blaze’s style, at all,’ McDonald told 404 Media. ‘It has the authenticity of an algorithm.’
He criticised Spotify for failing to prevent such uploads and said the company had a duty to stop AI-generated music from appearing under real artists’ names.
‘It’s kind of surprising that Spotify doesn’t have a security fix for this type of action,’ he said. ‘They could fix this problem if they had the will to do so.’ Spotify said it had flagged the track to distributor SoundOn and removed it for violating its deceptive content policy.
However, other similar uploads have already emerged. The company behind the Foley track, Syntax Error, was also linked to another AI-generated song titled ‘Happened To You’, uploaded last week under the name of Grammy-winning artist Guy Clark, who died in 2016.
Both tracks have since been removed, but Spotify has not explained how Syntax Error was able to post them using the names and likenesses of late musicians. The controversy is the latest in a wave of AI music incidents slipping through streaming platforms’ content checks.
Earlier this year, an AI-generated band called The Velvet Sundown amassed over a million Spotify streams before disclosing that all their vocals and instrumentals were made by AI.
Another high-profile case involved a fake Drake and The Weeknd collaboration, ‘Heart on My Sleeve’, which gained viral traction before being taken down by Universal Music Group.
Rights groups and artists have repeatedly warned about AI-generated content misrepresenting performers and undermining creative authenticity. As AI tools become more accessible, streaming platforms face mounting pressure to improve detection and approval processes to prevent further misuse.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Mexican actors have raised alarm over the threat AI poses to their profession, calling for stronger regulation to prevent voice cloning without consent.
From Mexico City’s Monument to the Revolution, dozens of audiovisual professionals rallied with signs reading phrases like ‘I don’t want to be replaced by AI.’ Lili Barba, president of the Mexican Association of Commercial Announcements, said actors are urging the government to legally recognise the voice as a biometric identifier.
She cited a recent video by Mexico’s National Electoral Institute that used the cloned voice of the late actor Jose Lavat without family consent. Lavat was famous for dubbing stars like Al Pacino and Robert De Niro. Barba called the incident ‘a major violation we can’t allow.’
Actor Harumi Nishizawa described voice dubbing as an intricate art form. She warned that without regulation, human dubbing could vanish along with millions of creative jobs.
Last year, AI’s potential to replace artists sparked major strikes in Hollywood, while Scarlett Johansson accused OpenAI of copying her voice for a chatbot.
Streaming services like Amazon Prime Video and platforms such as YouTube are now testing AI-assisted dubbing systems, with some studios promoting all-in-one AI tools.
In South Korea, CJ ENM recently introduced a system combining audio, video and character animation, highlighting the pace of AI adoption in entertainment.
Despite the tech’s growth, many in the industry argue that AI lacks the creative depth of real human performance, especially in emotional or comedic delivery. ‘AI can’t make dialogue sound broken or alive,’ said Mario Heras, a dubbing director in Mexico. ‘The human factor still protects us.’
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
It feels like just yesterday that the internet was buzzing over the first renditions of OpenAI’s DALL·E tool, with millions competing to craft the funniest, weirdest prompts and sharing the results across social media. The sentiment was clear: the public was fascinated by the creative potential of this new technology.
But beneath the laughter and viral memes was a quieter, more uneasy question: what happens when AI not only generates quirky artwork, but begins to reshape our daily lives—both online and off? As it turns out, that process was already underway behind the scenes—and we were none the wiser.
AI in action: How the entertainment industry is using it today
Three years later, we have reached a point where AI’s influence seems to have passed the point of no return. The entertainment industry was among the first to embrace this technology, and starting with the 2025 Academy Awards, films that incorporate AI are now eligible for Oscar nominations.
That decision has been met with mixed reactions, to put it lightly. While some have praised the industry’s eagerness to explore new technological frontiers, others have claimed that AI greatly diminishes the human contribution to the art of filmmaking and therefore takes away the essence of the seventh art form.
The first wave of AI-enhanced storytelling
One recent example is the film The Brutalist, in which AI was used to refine Adrien Brody’s Hungarian dialogue to sound more authentic—a move that sparked both technical admiration and creative scepticism.
With AI now embedded in everything from voiceovers to entire digital actors, we are only beginning to confront what it truly means when creativity is no longer exclusively human.
Setting the stage: AI in the spotlight
The first major big-screen resurrection occurred in 1994’s The Crow, where Brandon Lee’s sudden passing mid-production forced the studio to rely on body doubles, digital effects, and existing footage to complete his scenes. However, it was not until 2016 that audiences witnessed the first fully digital revival.
In Rogue One: A Star Wars Story, Peter Cushing’s character was brought back to life using a combination of CGI, motion capture, and a facial stand-in. Although primarily reliant on traditional VFX, the project paved the way for future use of deepfakes and AI-assisted performance recreation across movies, TV shows, and video games.
Afterward, some speculated that studios tied to Peter Cushing’s legacy—such as Tyburn Film Productions—could pursue legal action against Disney for reviving his likeness without direct approval. While no lawsuit was filed, questions were raised about who owns a performer’s digital identity after death.
The digital Jedi: How AI helped recreate Luke Skywalker
Fate would have it that AI’s grand debut would take place in a galaxy far, far away—with the surprise appearance of Luke Skywalker in the Season 2 finale of The Mandalorian (spoiler alert). The moment thrilled fans and marked a turning point for the franchise—but it was more than just fan service.
Here’s the twist: Mark Hamill did not record any new voice lines. Instead, actor Max Lloyd-Jones performed the physical role, while Hamill’s de-aged voice was recreated with the help of Respeecher, a Ukrainian company specialising in AI-driven speech synthesis.
Impressed by their work, Disney turned to Respeecher once again—this time to recreate James Earl Jones’s iconic Darth Vader voice for the Obi-Wan Kenobi miniseries. Using archival recordings that Jones signed over for AI use, the system synthesised new dialogue that perfectly matched the intonation and timbre of his original trilogy performances.
AI in moviemaking: Preserving legacy or crossing a line?
The use of AI to preserve and extend the voices of legendary actors has been met with a mix of admiration and unease. While many have praised the seamless execution and respect shown toward the legacy of both Hamill and Jones, others have raised concerns about consent, creative authenticity, and the long-term implications of allowing AI to perform in place of humans.
In both cases, the actors were directly involved or gave explicit approval, but these high-profile examples may be setting a precedent for a future where that level of control is not guaranteed.
A notable case that drew backlash was the planned use of a fully CGI-generated James Dean in the unreleased film Finding Jack, decades after his death. Critics and fellow actors have voiced strong opposition, arguing that bringing back a performer without their consent reduces them to a brand or asset, rather than honouring them as an artist.
AI in Hollywood: Actors made redundant?
What further heightened concerns among working actors was the launch of Promise, a new Hollywood studio built entirely around generative AI. Backed by wealthy investors, Promise is betting big on Muse—a GenAI tool designed to produce high-quality films and TV series at a fraction of the cost and time required for traditional Hollywood productions.
Filmmaking is a business, after all—and with production budgets ballooning year after year, AI-powered entertainment sounds like a dream come true for profit-driven studios.
Meta’s recent collaboration with Blumhouse Productions on MovieGen only adds fuel to the fire, signalling that major players are eager to explore a future where storytelling may be driven as much by algorithms as by authentic artistry.
AI in gaming: Automation or artistic collapse?
Speaking of entertainment businesses, we cannot ignore the world’s most popular entertainment medium: gaming. While the pandemic triggered a massive boom in game development and player engagement, the momentum was short-lived.
As profits began to slump in the years that followed, the industry was hit by a wave of layoffs, prompting widespread internal restructuring and forcing publishers to rethink their business models entirely. Hoping to cut costs, AAA companies set their sights on AI as their saving grace.
Nvidia’s development of AI chips, along with Ubisoft’s and EA’s investments in AI and machine learning, has sent a clear signal to the industry: automation is no longer just a backend tool; it is a front-facing strategy.
With AI-assisted NPC behaviour and AI voice acting, game development is shifting toward faster, cheaper, and potentially less human-driven production. In response, game developers have grown concerned about their future in the industry, and actors are less inclined to sign away their rights for future projects.
AI voice acting in video games
In an attempt to compete with wealthier studios, even indie developers have turned to GenAI to replicate the voices of celebrity voice actors. Tools like ElevenLabs and Altered Studio offer a seemingly straightforward way to get high-quality talent—but if only it were that simple.
Copyright laws and concerns over authenticity remain two of the strongest barriers to the widespread adoption of AI-generated voices—especially as many consumers still view the technology as a crutch rather than a creative tool for game developers.
The legal landscape around AI-generated voices remains murky. In many places, the rights to a person’s voice—or its synthetic clone—are poorly defined, creating loopholes developers can exploit.
AI voice cloning challenges legal boundaries in gaming
The legal ambiguity has fuelled a backlash from voice actors, who argue that their performances are being mimicked without consent or pay. SAG-AFTRA and others began pushing for tighter legal protections in 2023.
A notable flashpoint came in 2025, when Epic Games faced criticism for using an AI-generated Darth Vader voice in Fortnite. SAG-AFTRA filed a formal complaint, citing licensing concerns and a lack of actor involvement.
Not all uses have been controversial. CD Projekt Red recreated the voice of the late Miłogost Reczek in Cyberpunk 2077: Phantom Liberty—with his family’s blessing—setting a respectful precedent for the ethical use of AI.
How AI is changing music production and artist identity
AI is rapidly reshaping music production, with a recent survey showing that nearly 25% of producers are already integrating AI tools into their creative workflows. This shift reflects a growing trend in how technology is influencing composition, mixing, and even vocal performance.
Artists like Imogen Heap are embracing the change with projects like Mogen, an AI version of herself that can create music and interact with fans—blurring the line between human creativity and digital innovation.
Major labels are also experimenting: Universal Music recently used AI to reimagine Brenda Lee’s 1958 classic Rockin’ Around the Christmas Tree in Spanish, preserving the spirit of the original while expanding its cultural reach.
AI and the future of entertainment
As AI becomes more embedded in entertainment, the line between innovation and exploitation grows thinner. What once felt like science fiction is now reshaping the way stories are told—and who gets to tell them.
Whether AI becomes a tool for creative expansion or a threat to human artistry will depend on how the industry and audiences choose to engage with it in the years ahead. As in any business, consumers vote with their wallets, and only time will tell whether AI and authenticity can truly go hand-in-hand.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The use of an AI-generated Darth Vader voice in Fortnite has triggered a legal dispute between SAG-AFTRA and Epic Games.
According to GamesIndustry.biz, the actors’ union filed an unfair labor practice complaint, claiming it was not informed or consulted about the decision to use an artificial voice model in the game.
In Fortnite’s Galactic Battle season, players who defeat Darth Vader in Battle Royale can recruit him, triggering limited voice interactions powered by conversational AI.
The voice used stems from a licensing agreement with the estate of James Earl Jones, who retired in 2022 and granted rights for AI use of his iconic performance.
While Epic Games has confirmed it had legal permission to use Jones’ voice, SAG-AFTRA alleges the company bypassed union protocols by not informing them or offering the role to a human actor.
The outcome of this dispute could have broader implications for how AI voices are integrated into video games and media going forward, particularly regarding labor rights and union oversight.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Indian government has issued notices to ride-hailing companies Ola and Uber, launching an investigation into allegations of price discrimination. Concerns have arisen over reports and user complaints suggesting that iPhone users are being charged significantly higher fares than Android users for the same rides. The investigation, led by the Central Consumer Protection Authority (CCPA), aims to determine whether these price discrepancies are indeed occurring and whether they constitute unfair trade practices.
The government has previously expressed strong opposition to differential pricing, deeming it an unfair and discriminatory practice. India is a crucial market for both Ola and Uber, with intense competition among various ride-hailing services. The outcome of this investigation could have significant implications for the industry, potentially impacting pricing models and consumer trust.
Beyond the ride-hailing sector, the CCPA will also examine potential pricing disparities in other sectors, including food delivery and online ticketing platforms. The broader investigation aims to identify and address any instances where consumers may be facing discriminatory pricing based on factors such as the device they use or other personal characteristics.
Ensuring fair and transparent pricing practices in the digital economy is crucial. As technology continues to shape our daily lives, it is essential to address concerns about potential algorithmic biases and discriminatory practices that may be embedded within digital platforms. The Indian government’s action sends a clear message that such practices will not be tolerated and that consumer protection remains a top priority.
Social media buzzed over the weekend as ChatGPT, the popular AI chatbot, mysteriously refused to generate the name ‘David Mayer.’ Users reported responses halting mid-sentence or error messages when attempting to input the name, sparking widespread speculation about Mayer’s identity and theories that he might have requested privacy through legal means.
OpenAI, the chatbot’s developer, attributed the issue to a system glitch. A spokesperson clarified, ‘One of our tools mistakenly flagged this name, which shouldn’t have happened. We’re working on a fix.’ The company has since resolved the glitch for ‘David Mayer,’ but other names continue to trigger errors.
Conspiracy theories emerged online, with some suggesting a link to David Mayer de Rothschild, who denied involvement, and others speculating connections to a deceased academic with ties to a security list. Experts noted the potential relevance of GDPR’s ‘right to be forgotten’ privacy rules, which allow individuals to request the removal of their data from digital platforms.
However, privacy specialists highlighted AI systems’ challenges in fully erasing personal data due to their reliance on massive datasets from public sources. While the incident has drawn attention to the complexities of AI data handling and privacy compliance, OpenAI remains tight-lipped on whether the glitch stemmed from a deletion request under GDPR guidelines. The situation underscores the tension between advancing AI capabilities and safeguarding individual privacy.
A federal judge has ruled that New York City’s law requiring food delivery companies to share customer data with restaurants is unconstitutional. The decision, handed down by US District Judge Analisa Torres, found the law violated the First Amendment by regulating commercial speech inappropriately.
The law, introduced in 2021 to support local restaurants recovering from the COVID-19 pandemic, required delivery platforms like DoorDash and UberEats to share customer details. Delivery companies in the US argued that the law threatened both customer privacy and their business by allowing restaurants to use the data for their own marketing purposes.
Judge Torres stated that New York City failed to prove the law was necessary and suggested alternative methods to support restaurants, such as letting customers opt-in to share their data or providing financial incentives. City officials are reviewing the ruling, while delivery companies hailed it as a victory for data protection.
The New York City Hospitality Alliance expressed disappointment, claiming the ruling hurts small businesses and calling for the city to appeal the decision.