The AI race between China and the USA is shifting to classrooms. As AI governance expert Jovan Kurbalija highlights in his analysis of global AI strategies, both countries now treat AI literacy as a ‘strategic imperative’. From President Trump’s executive order to advance AI education to China’s new AI education strategy, both superpowers are betting big on nurturing homegrown AI talent.
Kurbalija sees focus on AI education as a rare bright spot in increasingly fractured tech geopolitics: ‘When students in Shanghai debug code alongside peers in Silicon Valley via open-source platforms, they’re not just building algorithms—they’re building trust.’
This grassroots collaboration, he argues, could soften the edges of emerging AI nationalism and support new types of digital and AI diplomacy.
He concludes that the latest AI education initiatives are ‘not just about who wins the AI race but, even more importantly, how we prepare humanity for the forthcoming AI transformation and coexistence with advanced technologies.’
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A team of researchers at Mount Sinai Hospital in New York has successfully calibrated an AI tool to more accurately assess the likelihood of hypertrophic cardiomyopathy (HCM) in patients.
By assigning specific probability scores, the AI model now offers clearer guidance to clinicians and patients regarding disease risk.
HCM, a thickening of the heart muscle that affects around one in 200 people globally, can lead to serious complications such as heart failure or sudden cardiac death.
The Viz HCM algorithm, already approved by the US Food and Drug Administration, previously provided vague classifications like ‘suspected HCM.’ Thanks to model calibration, clinicians can now give patients more precise estimates—for instance, a 60% probability of having the condition.
Researchers ran the algorithm on nearly 71,000 patients who had undergone electrocardiograms between March 2023 and January 2024. Out of these, 1,522 were flagged by the AI, with further review of medical records and imaging confirming diagnoses.
The results validated that the newly calibrated probabilities closely reflected real-world outcomes, improving the tool’s accuracy and practical utility.
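In practical terms, ‘calibration’ here means mapping a model’s raw scores onto probabilities that match observed outcome rates. The study’s exact method is not detailed above, so the sketch below uses Platt scaling on synthetic data as one common way this is done; all numbers, score ranges, and outcome rates are illustrative, not data from the Viz HCM study.

```python
import numpy as np

# Synthetic stand-in for the study setting: raw algorithm scores for
# flagged patients and chart-review outcomes (1 = confirmed HCM).
rng = np.random.default_rng(0)
scores = rng.uniform(-3, 3, 1000)                       # uncalibrated scores
true_p = 1 / (1 + np.exp(-(1.5 * scores - 0.5)))        # hidden outcome rate
labels = (rng.uniform(0, 1, 1000) < true_p).astype(int)

def platt_scale(s, y, steps=5000, lr=0.5):
    """Fit p = sigmoid(a*s + b) by gradient descent on log loss (Platt scaling)."""
    a, b = 1.0, 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(a * s + b)))
        a -= lr * np.mean((p - y) * s)   # gradient of log loss w.r.t. a
        b -= lr * np.mean(p - y)         # gradient of log loss w.r.t. b
    return a, b

a, b = platt_scale(scores, labels)

# A calibrated probability for one raw score, reportable to a clinician
# as e.g. "60% probability" rather than a vague "suspected HCM" flag.
prob = float(1 / (1 + np.exp(-(a * 0.4 + b))))
```

Once fitted, the same two parameters convert any new raw score into a probability whose long-run frequency matches real outcomes, which is exactly the property the researchers validated against chart review.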
Experts say this advancement enhances clinical workflows by helping doctors prioritise patients based on their actual risk levels.
Beyond technological innovation, the study marks a step forward in integrating AI responsibly into everyday clinical practice—making healthcare more personalised, interpretable, and effective.
In his thought-provoking blog post ‘Politeness in 2025: Why are we so kind to AI?’, Dr Jovan Kurbalija explores why nearly 80% of users in the UK and the USA instinctively say ‘please’ and ‘thank you’ to AI platforms like ChatGPT.
While machines lack feelings, our politeness reveals more about human psychology and cultural habits than the technology itself. For many, courtesy is a deeply ingrained reflex shaped by personality traits such as agreeableness and lifelong social conditioning, extending kindness even to non-sentient entities.
However, not everyone shares this approach. Some users are driven by subtle fears of future AI dominance, using politeness as a safeguard, while others prioritise efficiency, viewing AI purely as a tool undeserving of social niceties.
A rational minority dismisses politeness altogether, recognising AI as nothing more than code. Dr Kurbalija highlights that these varied responses reflect how we perceive and interact with technology, influenced by both evolutionary instincts and modern cognitive biases.
Beyond individual behaviour, Kurbalija points to a deeper issue: our tendency to humanise AI and expect it to behave like us, unlike traditional machines. This blurring of lines between tool and teammate raises important questions about how our perceptions shape AI’s role in society.
Ultimately, he suggests that politeness toward AI isn’t about the machine—it reflects the kind of humans we aspire to be, preserving empathy and grace in an increasingly digital world.
In his latest blog, part of a series expanding on ‘Don’t Waste the Crisis: How AI Can Help Reinvent International Geneva’, Dr Jovan Kurbalija explores how linguists are shifting from fear of AI to embracing a new era of opportunity. Geneva, home to over a thousand translators and interpreters, has felt the pressure as AI tools like ChatGPT began automating language tasks.
Yet, rather than rendering linguists obsolete, AI is transforming their role, highlighting the enduring importance of human expertise in bridging syntax and semantics—AI’s persistent blind spot. Dr Kurbalija emphasises that while AI excels at recognising patterns, it often fails to grasp meaning, nuance, and cultural context.
This is where linguists step in, offering critical value by enhancing AI’s understanding of language beyond mere structure. From supporting low-resource languages to ensuring ethical AI outputs in sensitive fields like law and diplomacy, linguists are positioned as key players in shaping responsible and context-aware AI systems.
Calling for adaptation over resistance, Dr Kurbalija advocates for linguists to upskill, specialise in areas where human judgement is irreplaceable, collaborate with AI developers, and champion ethical standards. Rather than facing decline, the linguistic profession is entering a renaissance, where embracing syntax and semantics ensures that AI amplifies human expression instead of diminishing it.
With Geneva’s vibrant multilingual community at the forefront, linguists have a pivotal role in guiding how language and technology evolve together in this new frontier.
The Declaration seeks to build a shared vision for AI that supports fair, inclusive, and sustainable global development. It is set to be officially adopted at the Hamburg Sustainability Conference in June 2025.
The initiative brings together voices from across sectors—governments, civil society, academia, and industry—to shape how AI can ethically and effectively align with the SDGs. Central to this effort is an open consultation process inviting stakeholders to provide feedback on the draft declaration, participate in expert discussions, and endorse its principles.
In addition to the declaration itself, the initiative also features the AI SDG Compendium, a global registry of AI projects contributing to sustainable development. The process has already gained visibility at major international forums like the Internet Governance Forum and the AI Action Summit in Paris, reflecting its growing significance in leveraging responsible AI for the SDGs.
The Declaration aims to ensure that AI is developed and used in ways that respect human rights, reduce inequalities, and foster sustainable progress. Establishing shared principles and promoting collaboration across sectors and regions sets a foundation for responsible AI that serves both people and the planet.
From GPT-4 to 4.5: What has changed and why it matters
In February 2025, OpenAI released GPT-4.5, the latest iteration in its series of large language models (LLMs), pushing the boundaries of what machines can do with language understanding and generation. Building on the strengths of GPT-4, the new model demonstrates improved reasoning capabilities, a more nuanced understanding of context, and smoother, more human-like interactions.
GPT-4.5 sets itself apart from its predecessors with refined alignment techniques, better memory over longer conversations, and increased control over tone, persona, and factual accuracy. Its ability to maintain coherent, emotionally resonant exchanges over extended dialogue marks a turning point in human-AI communication. These improvements are not just technical — they significantly affect the way we work, communicate, and relate to intelligent systems.
The increasing ability of GPT-4.5 to mimic human behaviour has raised a key question: Can it really fool us into thinking it is one of us? That question has recently been answered — and it has everything to do with the Turing Test.
The Turing Test: Origins, purpose, and modern relevance
In 1950, British mathematician and computer scientist Alan Turing posed a provocative question: ‘Can machines think?’ In his seminal paper ‘Computing Machinery and Intelligence,’ he proposed what would later become known as the Turing Test — a practical way of evaluating a machine’s ability to exhibit intelligent behaviour indistinguishable from that of a human.
In its simplest form, if a human evaluator cannot reliably distinguish between a human’s and a machine’s responses during a conversation, the machine is said to have passed the test. For decades, the Turing Test remained more of a philosophical benchmark than a practical one.
Early chatbots like ELIZA in the 1960s created the illusion of intelligence, but their scripted and shallow interactions fell far short of genuine human-like communication. Many researchers have questioned the test’s relevance as AI progressed, arguing that mimicking conversation is not the same as true understanding or consciousness.
Despite these criticisms, the Turing Test has endured — not as a definitive measure of machine intelligence, but rather as a cultural milestone and public barometer of AI progress. Today, the test has regained prominence with the emergence of models like GPT-4.5, which can hold complex, context-aware, emotionally intelligent conversations. What once seemed like a distant hypothetical is now an active, measurable challenge that GPT-4.5 has, by many accounts, overcome.
How GPT-4.5 fooled the judges: Inside the Turing Test study
In early 2025, a groundbreaking study conducted by researchers at the University of California, San Diego, provided the most substantial evidence yet that an AI could pass the Turing Test. In a controlled experiment involving over 500 participants, multiple conversational agents—including GPT-4.5, Meta’s LLaMa-3.1, and the classic chatbot ELIZA—were evaluated in blind text-based conversations. The participants were tasked with identifying whether they were speaking to a human or a machine.
The results were astonishing: GPT-4.5 was judged to be human in 54% to 73% of interactions, depending on the scenario, surpassing the 50% chance level commonly taken as the threshold for passing the Turing Test. In some cases, it outperformed actual human participants—who were correctly identified as human only 67% of the time.
That experiment marked the first time a contemporary AI model convincingly passed the Turing Test under rigorous scientific conditions. The study not only demonstrated the model’s technical capabilities—it also raised philosophical and ethical questions.
What does it mean for a machine to be ‘indistinguishable’ from a human? And more importantly, how should society respond to a world where AI can convincingly impersonate us?
Measuring up: GPT-4.5 vs LLaMa-3.1 and ELIZA
While GPT-4.5’s performance in the Turing Test has garnered much attention, its comparison with other models puts things into a clearer perspective. Meta’s LLaMa-3.1, a powerful and widely respected open-source model, also participated in the study.
It was identified as human in approximately 56% of interactions — a strong showing, although it fell just short of the commonly accepted benchmark to define a Turing Test pass. The result highlights how subtle differences in conversational nuance and coherence can significantly influence perception.
The study also revisited ELIZA, the pioneering chatbot from the 1960s designed to mimic a psychotherapist. While historically significant, ELIZA’s simplistic, rule-based structure resulted in it being identified as non-human in most cases — around 77%. That stark contrast with modern models demonstrates how far natural language processing has progressed over the past six decades.
The comparative results underscore an important point: success in human-AI interaction today depends not on language generation alone but on the ability to adapt tone, context, and emotional resonance. GPT-4.5’s edge seems to come not from mere fluency but from its ability to emulate the subtle cues of human reasoning and expression — a quality that left many test participants second-guessing whether they were even talking to a machine.
The power of persona: How character shaped perception
One of the most intriguing aspects of the UC San Diego study was how assigning specific personas to AI models significantly influenced participants’ perceptions. When GPT-4.5 was framed as an introverted, geeky 19-year-old college student, it consistently scored higher in being perceived as human than when it had no defined personality.
The seemingly small narrative detail was a powerful psychological cue that shaped how people interpreted its responses. The use of persona added a layer of realism to the conversation.
Slight awkwardness, informal phrasing, or quirky responses were not seen as flaws — they were consistent with the character. Participants were more likely to forgive or overlook certain imperfections if those quirks aligned with the model’s ‘personality’.
That finding reveals how intertwined identity and believability are in human communication, even when the identity is entirely artificial. The strategy also echoes something long known in storytelling and branding: people respond to characters, not just content.
In the context of AI, persona functions as a kind of narrative camouflage — not necessarily to deceive, but to disarm. It helps bridge the uncanny valley by offering users a familiar social framework. And as AI continues to evolve, it is clear that shaping how a model is perceived may be just as important as what the model is actually saying.
Limitations of the Turing Test: Beyond the illusion of intelligence
While passing the Turing Test has long been viewed as a milestone in AI, many experts argue that it is not the definitive measure of machine intelligence. The test focuses on imitation — whether an AI can appear human in conversation — rather than on genuine understanding, reasoning, or consciousness. In that sense, it is more about performance than true cognitive capability.
Critics point out that large language models like GPT-4.5 do not ‘understand’ language in the human sense — they generate text by predicting the most statistically probable next word based on patterns in massive datasets. That allows them to generate impressively coherent responses, but it does not equate to comprehension, self-awareness, or independent thought.
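That ‘next-word prediction’ can be illustrated with a deliberately tiny bigram model, a toy stand-in for an LLM that counts word pairs in a ten-word corpus instead of learning from massive datasets:

```python
import random

# Toy "next-token predictor": count which word follows which in a tiny
# corpus. A real LLM does the same job at vastly greater scale, over
# subword tokens and with a learned neural model instead of raw counts.
corpus = "the cat sat on the mat and the cat slept".split()
counts: dict[str, dict[str, int]] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    inner = counts.setdefault(prev, {})
    inner[nxt] = inner.get(nxt, 0) + 1

rng = random.Random(0)

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    words, weights = zip(*counts[prev].items())
    return rng.choices(words, weights=weights)[0]

# "the" was followed by "cat" twice and "mat" once, so "cat" is the
# statistically most probable continuation.
print(max(counts["the"], key=counts["the"].get))  # prints: cat
```

The point is not the mechanism’s simplicity but its nature: at no step does the model ‘know’ what a cat is — it only tracks which strings tend to follow which, which is why fluent output alone cannot settle questions of comprehension.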
No matter how convincing, the illusion of intelligence is still an illusion — and mistaking it for something more can lead to misplaced trust or overreliance. Despite its symbolic power, the Turing Test was never meant to be the final word on AI.
As AI systems grow increasingly sophisticated, new benchmarks are needed — ones that go beyond linguistic mimicry to assess reasoning, ethical decision-making, and robustness in real-world environments. Passing the Turing Test may grab headlines, but the real test of intelligence lies far beyond the ability to talk like us.
Wider implications: Rethinking the role of AI in society
GPT-4.5’s success in the Turing Test does not just mark a technical achievement — it forces us to confront deeper societal questions. If AI can convincingly pass as a human in open conversation, what does that mean for trust, communication, and authenticity in our digital lives?
From customer service bots to AI-generated news anchors, the line between human and machine is blurring — and the implications are far from purely academic. These developments are challenging existing norms in areas such as journalism, education, healthcare, and even online dating.
How do we ensure transparency when AI is involved? Should AI be required to disclose its identity in every interaction? And how do we guard against malicious uses — such as deepfake conversations or synthetic personas designed to manipulate, mislead, or exploit?
On a broader level, the emergence of human-sounding AI invites a rethinking of agency and responsibility. If a machine can persuade, sympathise, or influence like a person — who is accountable when things go wrong?
As AI becomes more integrated into the human experience, society must evolve its frameworks not only for regulation and ethics but also for cultural adaptation. GPT-4.5 may have passed the Turing Test, but the test for us, as a society, is just beginning.
What comes next: Human-machine dialogue in the post-Turing era
With GPT-4.5 crossing the Turing threshold, we are no longer asking whether machines can talk like us — we are now asking what that means for how we speak, think, and relate to machines. That moment represents a paradigm shift: from testing the machine’s ability to imitate humans to understanding how humans will adapt to coexist with machines that no longer feel entirely artificial.
Future AI models will likely push this boundary even further — engaging in conversations that are not only coherent but also deeply contextual, emotionally attuned, and morally responsive. The bar for what feels ‘human’ in digital interaction is rising rapidly, and with it comes the need for new social norms, protocols, and perhaps even new literacies.
We will need to learn not only how to talk to machines but how to live with them — as collaborators, counterparts, and, in some cases, as reflections of ourselves. In the post-Turing era, the test is no longer whether machines can fool us — it is whether we can maintain clarity, responsibility, and humanity in a world where the artificial feels increasingly real.
GPT-4.5 may have passed a historic milestone, but the real story is just beginning — not one of machines becoming human, but of humans redefining what it means to be ourselves in dialogue with them.
International Geneva is at a crossroads. With mounting budget cuts, declining trust in multilateralism, and growing geopolitical tensions, the city’s role as a hub for global cooperation is under threat.
In his thought-provoking blog, ‘Don’t waste the crisis: How AI can help reinvent International Geneva’, Jovan Kurbalija, Executive Director of Diplo, argues that AI could offer a way forward—not as a mere technological upgrade but as a strategic tool for transforming the city’s institutions and reviving its humanitarian spirit. Kurbalija envisions AI as a means to re-skill Geneva’s workforce, modernise its organisations, and preserve its vast yet fragmented knowledge base.
With professions such as translators, lawyers, and social scientists potentially playing pivotal roles in shaping AI tools, the city can harness its multilingual, highly educated population for a new kind of innovation. A bottom-up approach is key: practical steps like AI apprenticeships, micro-learning platforms, and ‘AI sandboxes’ would help institutions adapt at their own pace while avoiding the pitfalls of top-down tech imposition.
Organisations must also rethink how they operate. AI offers the chance to cut red tape, lighten the administrative burden on NGOs, and flatten outdated hierarchies in favour of more agile, data-driven decision-making.
At the same time, Geneva can lead by example in ethical AI governance—by ensuring accountability, protecting human rights and knowledge, and defending what Kurbalija calls our ‘right to imperfection’ in an increasingly optimised world. Ultimately, Geneva’s challenge is not technological—it’s organisational.
As AI tools become cheaper and more accessible, the real work lies in how institutions and communities embrace change. Kurbalija proposes a dedicated Geneva AI Fund to support apprenticeships, ethical projects, and local initiatives. He argues that this crisis could be Geneva’s opportunity not only to survive but to reinvent itself and inspire a global model of human-centred AI governance.
President Donald Trump’s administration has granted exemptions from steep tariffs on smartphones, laptops, and other electronics, providing relief to tech giants like Apple and Dell.
Announced on 5 April 2025 by US Customs and Border Protection, the exemptions cover 20 product categories, including semiconductors, and exclude these goods from Trump’s 10% baseline tariffs on non-Chinese imports, easing costs for items like iPhones made in India.
Wedbush Securities analyst Dan Ives hailed the move as ‘the most bullish news’ for the tech sector, coinciding with efforts by companies like Apple, which has shipped 1.5 million iPhones from India to sidestep tariffs.
However, the exemptions don’t fully shield tech from Trump’s trade war. His 125% reciprocal tariffs on Chinese imports remain, alongside earlier 20% duties tied to the fentanyl crisis, and a new national security probe into semiconductors looms.
Trump, speaking on 9 April, teased more details while claiming the US is reaping tariff revenue, but the decision hints at his awareness of inflation risks, with iPhone prices potentially hitting $2,300 under full tariffs.
The partial reprieve reflects Trump’s balancing act between trade promises and economic stability, especially after his campaign focused on lowering prices amid inflation concerns.
The backdrop is a volatile global market, with China retaliating by matching Trump’s 125% tariffs, sending US stocks on a rollercoaster and pushing gold to record highs.
Trump’s cosy ties with tech CEOs like Apple’s Tim Cook, who have embraced him since his 20 January inauguration, contrast with his tariff-driven agenda, which has sparked recession fears and Republican criticism ahead of next year’s midterms.
The exemptions offer tech a breather, but the broader US-China trade conflict threatens supply chains and global stability.
This tariff carve-out underscores Trump’s high-stakes gamble: reshaping trade to favour American interests while risking economic fallout at home.
With smartphones and laptops leading US imports from China at $41.7 billion and $33.1 billion in 2024, the exemptions may temper consumer price hikes, but the looming semiconductor probe and escalating tensions signal more turbulence ahead.
China and Russia have reportedly started using Bitcoin to settle certain energy transactions, a development that signals a shift away from the US dollar in global trade.
The move comes amid growing trade tensions and increasing interest in decentralised digital assets. According to Matthew Sigel, Head of Digital Assets Research at VanEck, Bitcoin’s role in trade is evolving beyond speculation.
The report highlights a growing trend of using digital assets in practical commerce, particularly in energy markets. Bitcoin’s neutral and decentralised nature makes it an appealing option for countries facing financial restrictions.
The shift may reinforce Bitcoin’s role as a hedge against monetary instability as international players are seeking alternative settlement methods.
Bolivia also plans to use cryptocurrency for power imports, while EDF is exploring Bitcoin mining to monetise surplus electricity.
For more information on these topics, visit diplomacy.edu.
Nissan Motor has partnered with UK-based AI company Wayve to develop the next generation of its autonomous driving technology, marking the first time a major automaker has publicly backed the start-up.
The carmaker intends to integrate Wayve’s AI Driver software into its ProPilot system, with a launch targeted for its fiscal year 2027, ending in March 2028.
Wayve claims the AI Driver platform, built on its embodied AI foundation model, will significantly enhance collision avoidance and overall safety.
Designed to navigate complex real-world conditions in a human-like way, the software will work in tandem with next-generation Lidar to deliver a more advanced driver assistance system.
The collaboration follows a $1.1 billion Series C funding round led by SoftBank in 2024, which also saw support from Microsoft and NVIDIA.
Nissan’s endorsement signals a major leap forward for Wayve’s technology, as the race to commercialise autonomous driving intensifies across the automotive industry.