French video game publisher Ubisoft is facing a formal privacy complaint from European advocacy group noyb for requiring players to stay online even when enjoying single-player games.
The complaint, lodged with Austria’s data protection authority, accuses Ubisoft of violating EU privacy laws by collecting personal data without consent.
Noyb argues that Ubisoft makes players connect to the internet and log into a Ubisoft account unnecessarily, even when they are not interacting with other users.
Instead of limiting data collection to essential functions, noyb claims the company contacts external servers, including those run by Google and Amazon, over 150 times during gameplay. This, they say, reveals a broader surveillance practice hidden beneath the surface.
Ubisoft, known for blockbuster titles like Assassin’s Creed and Far Cry, has not yet explained why such data collection is needed for offline play.
The complainant, who examined the network traffic, found that Ubisoft gathers login and browsing data and uses third-party tools, practices that require explicit user consent under the GDPR. Ubisoft has reportedly offered no justification for these invasive practices.
Noyb is calling on regulators to demand deletion of all data collected without a clear legal basis and to fine Ubisoft up to €92 million. They argue that consumers, who already pay steep prices for video games, should not have to sacrifice their privacy in the process.
A recent drop in reported ransomware attacks might seem encouraging, yet experts warn this is likely misleading. Figures from the NCC Group show a 32% decline in March 2025 compared to the previous month, totalling 600 incidents.
However, this dip is attributed to unusually large-scale attacks in earlier months, rather than an actual reduction in cybercrime. In fact, incidents were up 46% compared with March last year, highlighting the continued escalation in threat activity.
Rather than fading, ransomware groups are becoming more sophisticated. Babuk 2.0 emerged as the most active group in March, though doubts surround its legitimacy. Security researchers believe it may be recycling leaked data from previous breaches, aiming to trick victims instead of launching new attacks.
The tactic mirrors behaviour seen after law enforcement disrupted other major ransomware networks, such as LockBit in 2024.
Industrials were the hardest hit, followed by consumer-focused sectors, while North America bore the brunt of geographic targeting.
With nearly half of all recorded attacks occurring in the region, analysts expect North America, especially Canada, to remain a prime target amid rising political tensions and cyber vulnerability.
Meanwhile, cybercriminals are turning to malvertising, malicious code hidden in online advertisements, as a stealthier route of attack. This tactic has gained traction through the misuse of trusted platforms like GitHub and Dropbox, and is increasingly being enhanced with generative AI tools.
Instead of relying solely on technical expertise, attackers now use AI to craft more convincing and complex threats. As these strategies grow more advanced, experts urge organisations to stay alert and prioritise threat intelligence and collaboration to navigate this volatile cyber landscape.
The Jamaican Ministry of Education is testing AI tools in schools to assist teachers with marking and administrative duties.
Portfolio Minister Senator Dana Morris Dixon announced this during the Jamaica Teachers’ Association (JTA) Education Conference 2025, emphasising that AI would allow teachers to focus more on interacting with students, while AI handles routine tasks like grading.
The Ministry is also preparing to launch the Jamaica Learning Assistant, an AI-powered tool that personalises learning to fit individual students’ preferences, such as stories, humour, or quizzes.
Morris Dixon highlighted that AI is not meant to replace teachers, but to support them in delivering more effective lessons. The technology will allow students to review lessons, explore topics in more depth, and reinforce their understanding outside the classroom.
Looking ahead, the Government plans to open Jamaica’s first state-of-the-art AI lab later this year. The facility will offer a space where both students and teachers can develop technological solutions tailored for schools.
Additionally, the Ministry is distributing over 15,000 laptops, 600 smart boards, and 25,000 vouchers for teachers to subsidise the purchase of personal laptops to further integrate technology into the education system.
JTA President Mark Smith acknowledged the transformative potential of AI, calling it one of the most significant technological breakthroughs in history.
He urged educators to embrace this new paradigm and collaborate with the Ministry and the private sector to advance digital learning initiatives across the island.
The conference, held under the theme ‘Innovations in Education Technology: The Imperative of Change,’ reflects the ongoing push towards modernising education in Jamaica.
The Academy of Motion Picture Arts and Sciences has officially made films that incorporate AI eligible for Oscars, reflecting AI’s growing influence in cinema. Updated rules confirm that the use of generative AI or similar tools will neither help nor harm a film’s chances of nomination.
These guidelines, shaped with input from the Academy’s Science and Technology Council, aim to keep human creativity at the forefront, despite the increasing presence of digital tools in production.
Recent Oscar-winning films have already embraced AI. Adrien Brody’s performance in The Brutalist was enhanced using AI to refine his Hungarian accent, while the award-winning musical Emilia Pérez used voice-cloning technology to support its cast.
Such tools can convincingly replicate voices and visual styles, making them an attractive alternative to purely traditional methods, though their adoption has raised industry-wide concerns.
The 2023 Hollywood strikes highlighted the tension between artistic control and automation. Writers and actors protested the threat posed by AI to their livelihoods, leading to new agreements that limit the use of AI-generated content and protect individuals’ likenesses.
Actress Susan Sarandon voiced fears about unauthorised use of her image, and Scarlett Johansson echoed concerns about digital impersonation.
Despite some safeguards, many in the industry remain wary. Animators argue that AI lacks the emotional nuance needed for truly compelling storytelling, and Rokit Flix’s co-founder Jonathan Kendrick warned that AI might help draft scenes, but can’t deliver the depth required for an Oscar-worthy film.
Alongside the AI rules, the Academy also introduced a new voting requirement. Members must now view every nominated film in a category before casting their final vote, to encourage fairer decisions in this shifting creative environment.
Meta has announced it is ramping up efforts to protect teenagers on Instagram by deploying AI to detect users who may have lied about their age. The technology will automatically place suspected underage users into Teen Accounts, even if their profiles state they are adults.
These special accounts come with stricter safety settings designed for users under 16. Those who believe they’ve been misclassified will have the option to adjust their settings manually.
Instead of relying solely on self-reported birthdates, Meta is using its AI to analyse behaviour and signals that suggest a user might be younger than claimed.
While the company has used this technology to estimate age ranges before, it is now applying it more aggressively to catch teens who attempt to bypass the platform’s safeguards. The tech giant insists it’s working to ensure the accuracy of these classifications to prevent mistakes.
Alongside this new AI tool, Meta will also begin sending notifications to parents about their children’s Instagram settings.
These alerts, which are sent only to parents who have Instagram accounts of their own, aim to encourage open conversations at home about the importance of honest age representation online.
Teen Accounts were first introduced last year and are designed to limit access to harmful content, reduce contact from strangers, and promote healthier screen time habits.
Instead of granting unrestricted access, these accounts are private by default, block unsolicited messages, and remind teens to take breaks after prolonged scrolling.
Meta says the goal is to adapt to the digital age and partner with parents to make Instagram a safer space for young users.
TikTok is trialling a new feature called Footnotes in the United States, allowing users to add context to videos that may be misleading. The move mirrors the Community Notes system used by X, though TikTok will continue its own fact-checking programme in parallel.
Eligible adult users in the United States can apply to contribute Footnotes, and they will also be able to rate the helpfulness of others’ contributions.
Footnotes considered useful will appear publicly on TikTok, where the wider user base will then be able to vote on their value. The platform’s head of operations, Adam Presser, said the feature is designed to help users better understand complex topics, ongoing events, or content involving potentially misleading statistics.
The initiative builds on TikTok’s existing tools, including content labels, search banners, and partnerships with third-party fact-checkers such as AFP.
The announcement comes as TikTok’s parent company, ByteDance, continues negotiations with the US government to avoid a potential ban.
Talks over a sale have reportedly stalled amid rising tensions and new tariffs between Washington and Beijing.
While other tech giants such as Meta have scaled back fact-checking in favour of community-based moderation, TikTok is taking a combined approach to ensure greater content accuracy.
Lucy Powell, the UK government minister and Leader of the House of Commons, had her X account hacked on Tuesday morning to promote a fake cryptocurrency named ‘House of Commons Coin’ or $HCC.
The now-deleted posts claimed it was a ‘community-driven digital currency’ and featured the official House of Commons logo, misleading her nearly 70,000 followers. Her office confirmed the hack and said steps were taken quickly to remove the scam posts and secure the account.
The incident mirrors a growing trend where cyber criminals hijack high-profile accounts to advertise bogus crypto tokens. Instead of developing legitimate coins, fraudsters use phishing emails or leaked credentials to gain control, then post about hastily launched schemes designed to profit from users’ trust.
These coins are often promoted as community initiatives but vanish as soon as the creators cash out, a method known as ‘pump and dump’. In this case, analysts say only 34 transactions occurred, with a profit of just £225.
The UK Parliament stressed that cyber security is taken seriously and that MPs are advised on how to protect their accounts. Action Fraud reports over 35,000 incidents of hacked social or email accounts this year, urging users to adopt two-step verification and strong, unique passwords.
BBC journalist Nick Robinson experienced a similar hack earlier in the year, after falling for a fake message that led to posts promoting a bogus coin called ‘$Today’.
Google DeepMind has developed a groundbreaking AI model capable of interpreting and generating dolphin vocalisations.
Named DolphinGemma, the model was created in collaboration with researchers from Georgia Tech and the Wild Dolphin Project, a nonprofit organisation known for its extensive studies on Atlantic spotted dolphins.
Using an audio-in, audio-out architecture, DolphinGemma analyses sequences of natural dolphin sounds to detect patterns and structure, ultimately predicting the most likely sounds to follow.
The approach is similar to how large language models predict the next word in a sentence. It was trained using a vast acoustic database collected by the Wild Dolphin Project, ensuring accuracy in modelling natural dolphin communication.
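To make the next-sound analogy concrete, here is a minimal, purely illustrative Python sketch of sequence prediction over discretised sound units. It is not DolphinGemma’s actual architecture or API; the token names and the simple bigram counting model are hypothetical stand-ins for a learned audio model.

```python
# Purely illustrative toy model: predict the most likely next "sound unit" in a
# sequence, mirroring how an audio-in, audio-out model predicts continuations.
# The token names and training sequences below are invented for this example.
from collections import Counter, defaultdict
from typing import Optional

# Hypothetical sequences of discretised dolphin sound units.
training_sequences = [
    ["whistle_a", "click_train", "whistle_b", "burst_pulse"],
    ["whistle_a", "click_train", "whistle_b", "whistle_a"],
    ["click_train", "whistle_b", "burst_pulse", "whistle_a"],
]

# Count how often each sound follows each other sound (a simple bigram model).
transitions = defaultdict(Counter)
for seq in training_sequences:
    for current, following in zip(seq, seq[1:]):
        transitions[current][following] += 1

def predict_next_sound(context: list) -> Optional[str]:
    """Return the most likely next sound given the last sound in the context."""
    if not context or context[-1] not in transitions:
        return None
    return transitions[context[-1]].most_common(1)[0][0]

print(predict_next_sound(["whistle_a", "click_train"]))  # -> whistle_b
```

A real system of course learns from continuous audio rather than hand-labelled symbols, but the underlying idea, predicting the most probable continuation of a sequence, is the same one used by text LLMs.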
Lightweight and efficient, DolphinGemma is designed to run on smartphones, making it accessible for field researchers and conservationists.
Google DeepMind’s blog noted that the model could mark a major advance in understanding dolphin behaviour, potentially paving the way for more meaningful interactions between humans and marine mammals.
From GPT-4 to 4.5: What has changed and why it matters
In February 2025, OpenAI released GPT-4.5, the latest iteration in its series of large language models (LLMs), pushing the boundaries of what machines can do with language understanding and generation. Building on the strengths of GPT-4, GPT-4.5 demonstrates improved reasoning capabilities, a more nuanced understanding of context, and smoother, more human-like interactions.
What sets GPT-4.5 apart from its predecessors is that it showcases refined alignment techniques, better memory over longer conversations, and increased control over tone, persona, and factual accuracy. Its ability to maintain coherent, emotionally resonant exchanges over extended dialogue marks a turning point in human-AI communication. These improvements are not just technical — they significantly affect the way we work, communicate, and relate to intelligent systems.
The increasing ability of GPT-4.5 to mimic human behaviour has raised a key question: Can it really fool us into thinking it is one of us? That question has recently been answered — and it has everything to do with the Turing Test.
The Turing Test: Origins, purpose, and modern relevance
In 1950, British mathematician and computer scientist Alan Turing posed a provocative question: ‘Can machines think?’ In his seminal paper ‘Computing Machinery and Intelligence,’ he proposed what would later become known as the Turing Test — a practical way of evaluating a machine’s ability to exhibit intelligent behaviour indistinguishable from that of a human.
In its simplest form, if a human evaluator cannot reliably distinguish between a human’s and a machine’s responses during a conversation, the machine is said to have passed the test. For decades, the Turing Test remained more of a philosophical benchmark than a practical one.
Early chatbots like ELIZA in the 1960s created the illusion of intelligence, but their scripted and shallow interactions fell far short of genuine human-like communication. Many researchers have questioned the test’s relevance as AI progressed, arguing that mimicking conversation is not the same as true understanding or consciousness.
Despite these criticisms, the Turing Test has endured — not as a definitive measure of machine intelligence, but rather as a cultural milestone and public barometer of AI progress. Today, the test has regained prominence with the emergence of models like GPT-4.5, which can hold complex, context-aware, emotionally intelligent conversations. What once seemed like a distant hypothetical is now an active, measurable challenge that GPT-4.5 has, by many accounts, overcome.
How GPT-4.5 fooled the judges: Inside the Turing Test study
In early 2025, a groundbreaking study conducted by researchers at the University of California, San Diego, provided the most substantial evidence yet that an AI could pass the Turing Test. In a controlled experiment involving over 500 participants, multiple conversational agents—including GPT-4.5, Meta’s LLaMa-3.1, and the classic chatbot ELIZA—were evaluated in blind text-based conversations. The participants were tasked with identifying whether they spoke to a human or a machine.
The results were astonishing: GPT-4.5 was judged to be human in 54% to 73% of interactions, depending on the scenario, surpassing the baseline for passing the Turing Test. In some cases, it outperformed actual human participants—who were correctly identified as human only 67% of the time.
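To illustrate what ‘surpassing the baseline’ means, here is a minimal sketch, assuming a hypothetical round figure of 100 trials per condition, of how a ‘judged human’ rate can be compared with the 50% rate that interrogators guessing at random would produce. The percentages come from the results above; the trial count and the normal-approximation test are assumptions made for this example, not details of the study itself.

```python
# Illustrative only: test whether a "judged human" rate sits reliably above the
# 50% chance baseline. n_trials=100 is an assumed figure, not the study's.
from math import erf, sqrt

def p_value_above_chance(rate: float, n_trials: int, baseline: float = 0.5) -> float:
    """One-sided p-value (normal approximation to the binomial) for the null
    hypothesis that the true 'judged human' rate is no better than chance."""
    standard_error = sqrt(baseline * (1.0 - baseline) / n_trials)
    z = (rate - baseline) / standard_error
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

for label, rate in [("GPT-4.5, strongest condition", 0.73),
                    ("Human participants", 0.67)]:
    print(f"{label}: judged human {rate:.0%}, p ~ {p_value_above_chance(rate, 100):.2g}")
```

Under these assumptions both rates sit well above chance, which is why the comparison with real human participants, rather than the 50% line alone, is the more telling benchmark.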
That experiment marked the first time a contemporary AI model convincingly passed the Turing Test under rigorous scientific conditions. The study not only demonstrated the model’s technical capabilities—it also raised philosophical and ethical questions.
What does it mean for a machine to be ‘indistinguishable’ from a human? And more importantly, how should society respond to a world where AI can convincingly impersonate us?
Measuring up: GPT-4.5 vs LLaMa-3.1 and ELIZA
While GPT-4.5’s performance in the Turing Test has garnered much attention, its comparison with other models puts things into a clearer perspective. Meta’s LLaMa-3.1, a powerful and widely respected open-source model, also participated in the study.
It was identified as human in approximately 56% of interactions — a strong showing, although it fell just short of the commonly accepted benchmark to define a Turing Test pass. The result highlights how subtle conversational nuance and coherence differences can significantly influence perception.
The study also revisited ELIZA, the pioneering chatbot from the 1960s designed to mimic a psychotherapist. While historically significant, ELIZA’s simplistic, rule-based structure resulted in it being identified as non-human in most cases — around 77%. That stark contrast with modern models demonstrates how far natural language processing has progressed over the past six decades.
The comparative results underscore an important point: success in human-AI interaction today depends not only on fluent language generation but also on the ability to adapt tone, context, and emotional resonance. GPT-4.5’s edge seems to come not from mere fluency but from its ability to emulate the subtle cues of human reasoning and expression, a quality that left many test participants second-guessing whether they were even talking to a machine.
The power of persona: How character shaped perception
One of the most intriguing aspects of the UC San Diego study was how assigning specific personas to AI models significantly influenced participants’ perceptions. When GPT-4.5 was framed as an introverted, geeky 19-year-old college student, it was consistently more likely to be perceived as human than when it had no defined personality.
The seemingly small narrative detail was a powerful psychological cue that shaped how people interpreted its responses. The use of persona added a layer of realism to the conversation.
Slight awkwardness, informal phrasing, or quirky responses were not seen as flaws — they were consistent with the character. Participants were more likely to forgive or overlook certain imperfections if those quirks aligned with the model’s ‘personality’.
That finding reveals how intertwined identity and believability are in human communication, even when the identity is entirely artificial. The strategy also echoes something long known in storytelling and branding: people respond to characters, not just content.
In the context of AI, persona functions as a kind of narrative camouflage — not necessarily to deceive, but to disarm. It helps bridge the uncanny valley by offering users a familiar social framework. And as AI continues to evolve, it is clear that shaping how a model is perceived may be just as important as what the model is actually saying.
Limitations of the Turing Test: Beyond the illusion of intelligence
While passing the Turing Test has long been viewed as a milestone in AI, many experts argue that it is not the definitive measure of machine intelligence. The test focuses on imitation — whether an AI can appear human in conversation — rather than on genuine understanding, reasoning, or consciousness. In that sense, it is more about performance than true cognitive capability.
Critics point out that large language models like GPT-4.5 do not ‘understand’ language in the human sense – they generate text by predicting the most statistically probable next word based on patterns in massive datasets. That allows them to generate impressively coherent responses, but it does not equate to comprehension, self-awareness, or independent thought.
No matter how convincing, the illusion of intelligence is still an illusion — and mistaking it for something more can lead to misplaced trust or overreliance. Despite its symbolic power, the Turing Test was never meant to be the final word on AI.
As AI systems grow increasingly sophisticated, new benchmarks are needed, ones that assess not just linguistic mimicry but also reasoning, ethical decision-making, and robustness in real-world environments. Passing the Turing Test may grab headlines, but the real test of intelligence lies far beyond the ability to talk like us.
Wider implications: Rethinking the role of AI in society
GPT-4.5’s success in the Turing Test does not just mark a technical achievement — it forces us to confront deeper societal questions. If AI can convincingly pass as a human in open conversation, what does that mean for trust, communication, and authenticity in our digital lives?
From customer service bots to AI-generated news anchors, the line between human and machine is blurring — and the implications are far from purely academic. These developments are challenging existing norms in areas such as journalism, education, healthcare, and even online dating.
How do we ensure transparency when AI is involved? Should AI be required to disclose its identity in every interaction? And how do we guard against malicious uses — such as deepfake conversations or synthetic personas designed to manipulate, mislead, or exploit?
On a broader level, the emergence of human-sounding AI invites a rethinking of agency and responsibility. If a machine can persuade, sympathise, or influence like a person — who is accountable when things go wrong?
As AI becomes more integrated into the human experience, society must evolve its frameworks not only for regulation and ethics but also for cultural adaptation. GPT-4.5 may have passed the Turing Test, but the test for us, as a society, is just beginning.
What comes next: Human-machine dialogue in the post-Turing era
With GPT-4.5 crossing the Turing threshold, we are no longer asking whether machines can talk like us — we are now asking what that means for how we speak, think, and relate to machines. That moment represents a paradigm shift: from testing the machine’s ability to imitate humans to understanding how humans will adapt to coexist with machines that no longer feel entirely artificial.
Future AI models will likely push this boundary even further — engaging in conversations that are not only coherent but also deeply contextual, emotionally attuned, and morally responsive. The bar for what feels ‘human’ in digital interaction is rising rapidly, and with it comes the need for new social norms, protocols, and perhaps even new literacies.
We will need to learn not only how to talk to machines but how to live with them — as collaborators, counterparts, and, in some cases, as reflections of ourselves. In the post-Turing era, the test is no longer whether machines can fool us — it is whether we can maintain clarity, responsibility, and humanity in a world where the artificial feels increasingly real.
GPT-4.5 may have passed a historic milestone, but the real story is just beginning — not one of machines becoming human, but of humans redefining what it means to be ourselves in dialogue with them.
Crosswalk buttons in several Californian cities have been hacked to play AI-generated voices impersonating tech moguls Elon Musk and Mark Zuckerberg, delivering bizarre and satirical messages to pedestrians.
The spoof messages, which mock the CEOs with lines like ‘Can we be friends?’ and ‘Cooking our grandparents’ brains with AI slop,’ have been heard in Palo Alto, Redwood City, and Menlo Park.
Officials in Palo Alto confirmed that 12 intersections were affected and that the audio systems have since been disabled.
While the crosswalk signals themselves remain operational, authorities are investigating how the hack was carried out. Similar issues are being addressed in nearby cities, with local governments moving quickly to secure the compromised systems.
The prank, which uses AI voice cloning, appears to layer these spoofed messages on top of the usual accessibility features rather than replacing them entirely.
Though clearly comedic in intent, the incident has raised concerns about the growing ease with which public systems can be manipulated using generative technologies.