Google creates AI to decode dolphin sounds

Google DeepMind has developed a groundbreaking AI model capable of interpreting and generating dolphin vocalisations.

Named DolphinGemma, the model was created in collaboration with researchers from Georgia Tech and the Wild Dolphin Project, a nonprofit organisation known for its extensive studies on Atlantic spotted dolphins.

Using an audio-in, audio-out architecture, DolphinGemma analyses sequences of natural dolphin sounds to detect patterns and structures, ultimately predicting the most likely sounds to follow.

The approach is similar to how large language models predict the next word in a sentence. It was trained using a vast acoustic database collected by the Wild Dolphin Project, ensuring accuracy in modelling natural dolphin communication.
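
DolphinGemma’s internals have not been published in detail; the sketch below illustrates the same next-sound prediction idea, assuming the dolphin audio has already been converted into a sequence of discrete acoustic tokens. The architecture, names, and sizes are illustrative assumptions, not Google DeepMind’s actual design.

```python
import torch
import torch.nn as nn

# Illustrative vocabulary of discrete acoustic tokens (e.g. from an audio tokenizer).
VOCAB_SIZE = 1024   # hypothetical number of acoustic tokens
EMBED_DIM = 128
HIDDEN_DIM = 256

class NextSoundPredictor(nn.Module):
    """Toy audio-in, audio-out style model: given a sequence of acoustic
    tokens, predict a distribution over the next token."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.rnn = nn.GRU(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.head = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)

    def forward(self, tokens):            # tokens: (batch, seq_len) of token ids
        x = self.embed(tokens)            # (batch, seq_len, EMBED_DIM)
        out, _ = self.rnn(x)              # (batch, seq_len, HIDDEN_DIM)
        return self.head(out)             # logits over the next token at each step

model = NextSoundPredictor()
sequence = torch.randint(0, VOCAB_SIZE, (1, 32))   # a fake 32-token vocalisation
logits = model(sequence)
predicted_next = logits[0, -1].argmax().item()     # most likely next acoustic token
print(predicted_next)
```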

Lightweight and efficient, DolphinGemma is designed to run on smartphones, making it accessible for field researchers and conservationists.

Google DeepMind’s blog noted that the model could mark a major advance in understanding dolphin behaviour, potentially paving the way for more meaningful interactions between humans and marine mammals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Beyond the imitation game: GPT-4.5, the Turing Test, and what comes next

From GPT-4 to 4.5: What has changed and why it matters

In early 2025, OpenAI released GPT-4.5, the latest iteration in its series of large language models (LLMs), pushing the boundaries of what machines can do with language understanding and generation. Building on the strengths of GPT-4, the new model demonstrates improved reasoning capabilities, a more nuanced understanding of context, and smoother, more human-like interactions.

What sets GPT-4.5 apart from its predecessors is its refined alignment techniques, better memory over longer conversations, and increased control over tone, persona, and factual accuracy. Its ability to maintain coherent, emotionally resonant exchanges over extended dialogue marks a turning point in human-AI communication. These improvements are not just technical — they significantly affect the way we work, communicate, and relate to intelligent systems.

The increasing ability of GPT-4.5 to mimic human behaviour has raised a key question: Can it really fool us into thinking it is one of us? That question has recently been answered — and it has everything to do with the Turing Test.

The Turing Test: Origins, purpose, and modern relevance

In 1950, British mathematician and computer scientist Alan Turing posed a provocative question: ‘Can machines think?’ In his seminal paper ‘Computing Machinery and Intelligence,’ he proposed what would later become known as the Turing Test — a practical way of evaluating a machine’s ability to exhibit intelligent behaviour indistinguishable from that of a human.

In its simplest form, if a human evaluator cannot reliably distinguish between a human’s and a machine’s responses during a conversation, the machine is said to have passed the test. For decades, the Turing Test remained more of a philosophical benchmark than a practical one.

Early chatbots like ELIZA in the 1960s created the illusion of intelligence, but their scripted and shallow interactions fell far short of genuine human-like communication. Many researchers have questioned the test’s relevance as AI progressed, arguing that mimicking conversation is not the same as true understanding or consciousness.

Despite these criticisms, the Turing Test has endured — not as a definitive measure of machine intelligence, but rather as a cultural milestone and public barometer of AI progress. Today, the test has regained prominence with the emergence of models like GPT-4.5, which can hold complex, context-aware, emotionally intelligent conversations. What once seemed like a distant hypothetical is now an active, measurable challenge that GPT-4.5 has, by many accounts, overcome.

How GPT-4.5 fooled the judges: Inside the Turing Test study

In early 2025, a groundbreaking study conducted by researchers at the University of California, San Diego, provided the most substantial evidence yet that an AI could pass the Turing Test. In a controlled experiment involving over 500 participants, multiple conversational agents—including GPT-4.5, Meta’s LLaMa-3.1, and the classic chatbot ELIZA—were evaluated in blind text-based conversations. The participants were tasked with identifying whether they were speaking to a human or a machine.

The results were astonishing: GPT-4.5 was judged to be human in 54% to 73% of interactions, depending on the scenario, surpassing the baseline for passing the Turing Test. In some cases, it outperformed actual human participants—who were correctly identified as human only 67% of the time.
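
To see why such percentages are read as beating chance, the sketch below runs a one-sided binomial test of a judged-human rate against the 50% a guessing judge would achieve. The counts are hypothetical round numbers for illustration, not the study’s actual per-condition sample sizes.

```python
from scipy.stats import binomtest

# Hypothetical counts: 73 'judged human' verdicts out of 100 interactions.
judged_human, total = 73, 100

# One-sided test against the 50% level a coin-flipping judge would achieve.
result = binomtest(judged_human, total, p=0.5, alternative="greater")
print(f"judged human in {judged_human / total:.0%} of chats, p-value = {result.pvalue:.4f}")
```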

That experiment marked the first time a contemporary AI model convincingly passed the Turing Test under rigorous scientific conditions. The study not only demonstrated the model’s technical capabilities—it also raised philosophical and ethical questions.

What does it mean for a machine to be ‘indistinguishable’ from a human? And more importantly, how should society respond to a world where AI can convincingly impersonate us?

Measuring up: GPT-4.5 vs LLaMa-3.1 and ELIZA

While GPT-4.5’s performance in the Turing Test has garnered much attention, its comparison with other models puts things into a clearer perspective. Meta’s LLaMa-3.1, a powerful and widely respected open-source model, also participated in the study.

It was identified as human in approximately 56% of interactions — a strong showing, although it fell just short of the commonly accepted benchmark for a Turing Test pass. The result highlights how subtle differences in conversational nuance and coherence can significantly influence perception.

The study also revisited ELIZA, the pioneering chatbot from the 1960s designed to mimic a psychotherapist. While historically significant, ELIZA’s simplistic, rule-based structure resulted in it being identified as non-human in most cases — around 77%. That stark contrast with modern models demonstrates how far natural language processing has progressed over the past six decades.

The comparative results underscore an important point: success in human-AI interaction today depends not only on fluent language generation but also on the ability to adapt tone, context, and emotional resonance. GPT-4.5’s edge seems to come not from mere fluency but from its ability to emulate the subtle cues of human reasoning and expression — a quality that left many test participants second-guessing whether they were even talking to a machine.

The power of persona: How character shaped perception

One of the most intriguing aspects of the UC San Diego study was how assigning specific personas to AI models significantly influenced participants’ perceptions. When GPT-4.5 was framed as an introverted, geeky 19-year-old college student, it was consistently more likely to be perceived as human than when it had no defined personality.

That seemingly small narrative detail acted as a powerful psychological cue, shaping how people interpreted its responses. The persona added a layer of realism to the conversation.

Slight awkwardness, informal phrasing, or quirky responses were not seen as flaws — they were consistent with the character. Participants were more likely to forgive or overlook certain imperfections if those quirks aligned with the model’s ‘personality’.
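
The researchers’ exact prompts are not reproduced here, but a persona of this kind is typically supplied to a chat model as a system message. Below is a minimal sketch using the OpenAI Python client; the persona wording and the model identifier are illustrative assumptions, not the study’s actual setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# An illustrative persona in the spirit of the study's 'introverted, geeky
# 19-year-old college student' framing; not the researchers' actual prompt.
persona = (
    "You are a shy, slightly awkward 19-year-old college student who is into "
    "retro video games. Keep replies short, informal, and occasionally unsure."
)

response = client.chat.completions.create(
    model="gpt-4.5-preview",  # illustrative model identifier
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "Hey, what did you get up to this weekend?"},
    ],
)
print(response.choices[0].message.content)
```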

That finding reveals how intertwined identity and believability are in human communication, even when the identity is entirely artificial. The strategy also echoes something long known in storytelling and branding: people respond to characters, not just content.

In the context of AI, persona functions as a kind of narrative camouflage — not necessarily to deceive, but to disarm. It helps bridge the uncanny valley by offering users a familiar social framework. And as AI continues to evolve, it is clear that shaping how a model is perceived may be just as important as what the model is actually saying.

Limitations of the Turing Test: Beyond the illusion of intelligence

While passing the Turing Test has long been viewed as a milestone in AI, many experts argue that it is not the definitive measure of machine intelligence. The test focuses on imitation — whether an AI can appear human in conversation — rather than on genuine understanding, reasoning, or consciousness. In that sense, it is more about performance than true cognitive capability.

Critics point out that large language models like GPT-4.5 do not ‘understand’ language in the human sense – they generate text by predicting the most statistically probable next word based on patterns in massive datasets. That allows them to generate impressively coherent responses, but it does not equate to comprehension, self-awareness, or independent thought.
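
A toy numeric example of that prediction step: the model assigns a score (logit) to each candidate next word, converts the scores into probabilities, and emits the most likely one. The vocabulary and scores below are invented purely for illustration.

```python
import numpy as np

# Toy vocabulary and logits for the prompt "The cat sat on the ...".
vocabulary = ["mat", "roof", "keyboard", "moon"]
logits = np.array([3.2, 1.1, 0.4, -2.0])   # invented scores, not real model output

# Softmax turns scores into a probability distribution over next words.
probabilities = np.exp(logits) / np.exp(logits).sum()

for word, p in zip(vocabulary, probabilities):
    print(f"{word:>9}: {p:.2%}")

print("predicted next word:", vocabulary[int(np.argmax(probabilities))])
```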

No matter how convincing, the illusion of intelligence is still an illusion — and mistaking it for something more can lead to misplaced trust or overreliance. Despite its symbolic power, the Turing Test was never meant to be the final word on AI.

As AI systems grow increasingly sophisticated, new benchmarks are needed: ones that look beyond linguistic mimicry to assess reasoning, ethical decision-making, and robustness in real-world environments. Passing the Turing Test may grab headlines, but the real test of intelligence lies far beyond the ability to talk like us.

Wider implications: Rethinking the role of AI in society

GPT-4.5’s success in the Turing Test does not just mark a technical achievement — it forces us to confront deeper societal questions. If AI can convincingly pass as a human in open conversation, what does that mean for trust, communication, and authenticity in our digital lives?

From customer service bots to AI-generated news anchors, the line between human and machine is blurring — and the implications are far from purely academic. These developments are challenging existing norms in areas such as journalism, education, healthcare, and even online dating.

How do we ensure transparency when AI is involved? Should AI be required to disclose its identity in every interaction? And how do we guard against malicious uses — such as deepfake conversations or synthetic personas designed to manipulate, mislead, or exploit?

On a broader level, the emergence of human-sounding AI invites a rethinking of agency and responsibility. If a machine can persuade, sympathise, or influence like a person — who is accountable when things go wrong?

As AI becomes more integrated into the human experience, society must evolve its frameworks not only for regulation and ethics but also for cultural adaptation. GPT-4.5 may have passed the Turing Test, but the test for us, as a society, is just beginning.

What comes next: Human-machine dialogue in the post-Turing era

With GPT-4.5 crossing the Turing threshold, we are no longer asking whether machines can talk like us — we are now asking what that means for how we speak, think, and relate to machines. That moment represents a paradigm shift: from testing the machine’s ability to imitate humans to understanding how humans will adapt to coexist with machines that no longer feel entirely artificial.

Future AI models will likely push this boundary even further — engaging in conversations that are not only coherent but also deeply contextual, emotionally attuned, and morally responsive. The bar for what feels ‘human’ in digital interaction is rising rapidly, and with it comes the need for new social norms, protocols, and perhaps even new literacies.

We will need to learn not only how to talk to machines but how to live with them — as collaborators, counterparts, and, in some cases, as reflections of ourselves. In the post-Turing era, the test is no longer whether machines can fool us — it is whether we can maintain clarity, responsibility, and humanity in a world where the artificial feels increasingly real.

GPT-4.5 may have passed a historic milestone, but the real story is just beginning — not one of machines becoming human, but of humans redefining what it means to be ourselves in dialogue with them.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI voice hacks put fake Musk and Zuckerberg at crosswalks

Crosswalk buttons in several Californian cities have been hacked to play AI-generated voices impersonating tech moguls Elon Musk and Mark Zuckerberg, delivering bizarre and satirical messages to pedestrians.

The spoof messages, which mock the CEOs with lines like ‘Can we be friends?’ and ‘Cooking our grandparents’ brains with AI slop,’ have been heard in Palo Alto, Redwood City, and Menlo Park.

Officials in Palo Alto confirmed that 12 intersections were affected and that the audio systems have since been disabled.

While the crosswalk signals themselves remain operational, authorities are investigating how the hack was carried out. Similar issues are being addressed in nearby cities, with local governments moving quickly to secure the compromised systems.

The prank, which uses AI voice cloning, appears to layer these spoofed messages on top of the usual accessibility features rather than replacing them entirely.

Though clearly comedic in intent, the incident has raised concerns about the growing ease with which public systems can be manipulated using generative technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft users at risk from tax-themed cyberattack

As the US tax filing deadline of April 15 approaches, cybercriminals are ramping up phishing attacks designed to exploit the urgency many feel during this stressful period.

Windows users are particularly at risk, as attackers are targeting Microsoft account credentials by distributing emails disguised as tax-related reminders.

These emails include a PDF attachment titled ‘urgent reminder,’ which contains a malicious QR code. Once scanned, it leads users through fake bot protection and CAPTCHA checks before prompting them to enter their Microsoft login details, which are then sent to a server controlled by criminals.

Security researchers, including Peter Arntz from Malwarebytes, warn that the email addresses in these fake login pages are already pre-filled, making it easier for unsuspecting victims to fall into the trap.

Entering your password at this stage could hand your credentials to malicious actors, possibly operating from Russia, who may exploit your account for maximum profit.

This form of attack takes advantage of both the ticking tax clock and the stress many feel in trying to meet the deadline, encouraging impulsive and risky clicks.

Importantly, this threat is not limited to Windows users or those filing taxes by the April 15 deadline. As phishing techniques become more advanced through the use of AI and automated smartphone farms, similar scams are expected to persist well beyond tax season.

The IRS rarely contacts individuals via email and never to request sensitive information through links or attachments, so any such message should be treated with suspicion instead of trust.

To stay safe, users are urged to remain vigilant and avoid clicking on links or scanning codes from unsolicited emails. Instead of relying on emails for tax updates or returns, go directly to official websites.
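
One simple way to apply that advice programmatically is to check where a login link actually points before entering credentials. The sketch below uses an illustrative allowlist of genuine Microsoft sign-in domains; a real deployment would need a far more complete policy.

```python
from urllib.parse import urlparse

# Illustrative allowlist; legitimate Microsoft sign-in traffic uses domains like these.
TRUSTED_LOGIN_DOMAINS = {"login.microsoftonline.com", "login.live.com"}

def looks_legitimate(url: str) -> bool:
    """Return True only if the link's hostname is on the allowlist."""
    host = (urlparse(url).hostname or "").lower()
    return host in TRUSTED_LOGIN_DOMAINS

print(looks_legitimate("https://login.microsoftonline.com/common/oauth2/v2.0/authorize"))  # True
print(looks_legitimate("https://login-microsoft.example-tax-refund.com/signin"))           # False
```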

The IRS offers resources to help recognise and report scams, and reviewing this guidance could be an essential step in protecting your personal information, not just today, but in the months ahead.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE experts warn on AI privacy risks in art apps

A surge in AI applications transforming selfies into Studio Ghibli-style artwork has captivated social media, but UAE cybersecurity experts are raising concerns over privacy and data misuse.

Dr Mohamed Al Kuwaiti, Head of Cybersecurity for the UAE Government, warned that engaging with unofficial apps could lead to breaches or leaks of personal data. He emphasised that while AI’s benefits are clear, users must understand how their personal data is handled by these platforms.

He called for strong cybersecurity standards across all digital platforms, urging individuals to be more cautious with their data.

Media professionals are also sounding alarms. Adel Al-Rashed, an Emirati journalist, cautioned that free apps often mimic trusted platforms but could exploit user data. He advised users to stick to verified applications, noting that paid services, like ChatGPT’s Pro edition, offer stronger privacy protections.

While acknowledging the risks, social media influencer Ibrahim Al-Thahli highlighted the excitement AI brings to creative expression. He urged users to focus on education and safe engagement with the technology, underscoring the UAE’s goal to build a resilient digital economy.

For more information on these topics, visit diplomacy.edu.

Victims of AI-driven sex crimes in Korea continue to grow

South Korea is facing a sharp rise in AI-related digital sex crimes, with deepfake pornography and online abuse increasingly affecting young women and children.

According to figures released by the Ministry of Gender Equality and Family and the Women’s Human Rights Institute, over 10,000 people sought help last year, marking a 14.7 percent increase from 2023.

Women made up more than 70 percent of those who contacted the Advocacy Center for Online Sexual Abuse Victims.

The majority were in their teens or twenties, with abuse often occurring via social media, messaging apps, and anonymous platforms. A growing portion of victims, including children under 10, were targeted due to the easy accessibility of AI tools.

The most frequently reported issue was ‘distribution anxiety,’ where victims feared the release of sensitive or manipulated videos, followed by blackmail and illegal filming.

Deepfake cases more than tripled in one year, with synthetic content often involving the use of female students’ images. In one notable incident, a university student and his peers used deepfake techniques to create explicit fake images of classmates and shared them on Telegram.

With over 300,000 pieces of illicit content removed in 2024, authorities warn that the majority of illegal websites are hosted overseas, complicating efforts to take down harmful material.

The South Korean government plans to strengthen its response by expanding educational outreach, supporting victims further, and implementing new laws to prevent secondary harm by allowing the removal of personal information alongside explicit images.

For more information on these topics, visit diplomacy.edu.

ChatGPT accused of enabling fake document creation

Concerns over digital security have intensified after reports revealed that OpenAI’s ChatGPT has been used to generate fake identification cards.

The incident follows the recent introduction of a popular Ghibli-style feature, which led to a sharp rise in usage and viral image generation across social platforms.

Among the fakes circulating online were forged versions of India’s Aadhaar ID, created with fabricated names, photos, and even QR codes.

While the Ghibli release helped push ChatGPT past 150 million active users, the tool’s advanced capabilities have now drawn criticism.

Some users demonstrated how the AI could replicate Aadhaar and PAN cards with surprising accuracy, even using images of well-known figures like OpenAI CEO Sam Altman and Tesla’s Elon Musk. The ease with which these near-perfect replicas were produced has raised alarms about identity theft and fraud.

The emergence of AI-generated IDs has reignited calls for clearer AI regulation and transparency. Critics are questioning how AI systems gained access to the formatting of official documents, amid accusations that sensitive datasets may be feeding model development.

As generative AI continues to evolve, pressure is mounting on both developers and regulators to address the growing risk of misuse.

For more information on these topics, visit diplomacy.edu.

DeepSeek highlights the risk of data misuse

The launch of DeepSeek, a Chinese-developed LLM, has reignited long-standing concerns about AI, national security, and industrial espionage.

While issues like data usage and bias remain central to AI discourse, DeepSeek’s origins in China have introduced deeper geopolitical anxieties. Echoing the scrutiny faced by TikTok, the model has raised fears of potential links to the Chinese state and its history of alleged cyber espionage.

With China and the US locked in a high-stakes AI race, every new model is now a strategic asset. DeepSeek’s emergence underscores the need for heightened vigilance around data protection, especially regarding sensitive business information and intellectual property.

Security experts warn that AI models may increasingly be trained using data acquired through dubious or illicit means, such as large-scale scraping or state-sponsored hacks.

The practice of data hoarding further complicates matters, as encrypted data today could be exploited in the future as decryption methods evolve.

Cybersecurity leaders are being urged to adapt to this evolving threat landscape. Beyond basic data visibility and access controls, there is growing emphasis on adopting privacy-enhancing technologies and encryption standards that can withstand future quantum threats.

Businesses must also recognise the strategic value of their data in an era where the lines between innovation, competition, and geopolitics have become dangerously blurred.

For more information on these topics, visit diplomacy.edu.

Blockchain app ARK fights to keep human creativity ahead of AI

Nearly 20 years after an AI scare in his own career, screenwriter Ed Bennett-Coles has teamed up with songwriter Jamie Hartman to develop ARK, a blockchain app designed to safeguard creative work from AI exploitation.

The platform lets artists register ownership of their ideas at every stage, from initial concept to final product, using biometric security and blockchain verification instead of traditional copyright systems.
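
ARK’s implementation has not been published, but the underlying idea of stage-by-stage registration can be sketched as a hash chain: each version of a work is fingerprinted and linked to the previous record, so later tampering is detectable. All field names below are illustrative assumptions, not ARK’s actual data model.

```python
import hashlib
import json
import time

def register_stage(ledger: list, artist: str, stage: str, content: bytes) -> dict:
    """Append a record linking this version of the work to the previous record."""
    previous_hash = ledger[-1]["record_hash"] if ledger else "0" * 64
    record = {
        "artist": artist,
        "stage": stage,
        "content_hash": hashlib.sha256(content).hexdigest(),
        "previous_hash": previous_hash,
        "timestamp": time.time(),
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(record)
    return record

ledger = []
register_stage(ledger, "Jamie Hartman", "initial concept", b"verse 1 draft lyrics")
register_stage(ledger, "Jamie Hartman", "final product", b"final mastered lyrics")
print(json.dumps(ledger, indent=2))
```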

ARK aims to protect human creativity in an AI-dominated world. ‘It’s about ring-fencing the creative process so artists can still earn a living,’ Hartman told AFP.

The app, backed by Claritas Capital and BMI, uses decentralised blockchain technology instead of centralised systems to give creators full control over their intellectual property.

Launching in summer 2025, ARK challenges AI’s ‘growth at all costs’ mentality by emphasising creative journeys over end products.

Bennett-Coles compares AI content to online meat delivery: efficient but soulless. Human artistry, by contrast, resembles a grandfather’s trip to the butcher, where the experience matters as much as the result.

The duo hopes their solution will inspire industries to modernise copyright protections before AI erodes them completely.

For more information on these topics, visit diplomacy.edu.