The Declaration seeks to build a shared vision for AI that supports fair, inclusive, and sustainable global development. It is set to be officially adopted at the Hamburg Sustainability Conference in June 2025.
The initiative brings together voices from across sectors—governments, civil society, academia, and industry—to shape how AI can ethically and effectively align with the SDGs. Central to this effort is an open consultation process inviting stakeholders to provide feedback on the draft declaration, participate in expert discussions, and endorse its principles.
In addition to the declaration itself, the initiative features the AI SDG Compendium, a global registry of AI projects contributing to sustainable development. The process has already gained visibility at major international forums such as the Internet Governance Forum and the AI Action Summit in Paris, reflecting the growing momentum behind responsible AI for the SDGs.
The Declaration aims to ensure that AI is developed and used in ways that respect human rights, reduce inequalities, and foster sustainable progress. Establishing shared principles and promoting collaboration across sectors and regions sets a foundation for responsible AI that serves both people and the planet.
From GPT-4 to 4.5: What has changed and why it matters
In February 2025, OpenAI released GPT-4.5, the latest iteration in its series of large language models (LLMs), pushing the boundaries of what machines can do with language understanding and generation. Building on the strengths of GPT-4, the new model demonstrates improved reasoning capabilities, a more nuanced understanding of context, and smoother, more human-like interactions.
What sets GPT-4.5 apart from its predecessors is its refined alignment techniques, better memory over longer conversations, and increased control over tone, persona, and factual accuracy. Its ability to maintain coherent, emotionally resonant exchanges over extended dialogue marks a turning point in human-AI communication. These improvements are not just technical — they significantly affect the way we work, communicate, and relate to intelligent systems.
The increasing ability of GPT-4.5 to mimic human behaviour has raised a key question: Can it really fool us into thinking it is one of us? That question has recently been answered — and it has everything to do with the Turing Test.
The Turing Test: Origins, purpose, and modern relevance
In 1950, British mathematician and computer scientist Alan Turing posed a provocative question: ‘Can machines think?’ In his seminal paper ‘Computing Machinery and Intelligence,’ he proposed what would later become known as the Turing Test — a practical way of evaluating a machine’s ability to exhibit intelligent behaviour indistinguishable from that of a human.
In its simplest form, if a human evaluator cannot reliably distinguish between a human’s and a machine’s responses during a conversation, the machine is said to have passed the test. For decades, the Turing Test remained more of a philosophical benchmark than a practical one.
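To make that criterion concrete, here is a minimal Python sketch of how such a blind evaluation might be scored; the verdicts are invented purely for illustration. Each trial records whether the judge labelled the machine 'human', and the judged-human rate is then compared with the 50% expected from random guessing.

```python
# Minimal sketch of scoring a Turing-Test-style blind evaluation.
# The verdicts below are illustrative, not data from any real study.

def judged_human_rate(verdicts: list[bool]) -> float:
    """Fraction of trials in which the judge labelled the machine 'human'."""
    return sum(verdicts) / len(verdicts)

# Each entry: True if the judge guessed 'human' for the machine's transcript.
machine_verdicts = [True, False, True, True, False, True, True, False]

rate = judged_human_rate(machine_verdicts)
print(f"Judged human in {rate:.0%} of trials")

# A common informal reading of Turing's criterion: the machine 'passes'
# if judges cannot reliably tell it apart, i.e. its judged-human rate
# is at or above the 50% expected from random guessing.
print("Pass (informal criterion):", rate >= 0.5)
```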
Early chatbots like ELIZA in the 1960s created the illusion of intelligence, but their scripted, shallow interactions fell far short of genuine human-like communication. As AI progressed, many researchers questioned the test's relevance, arguing that mimicking conversation is not the same as true understanding or consciousness.
Despite these criticisms, the Turing Test has endured — not as a definitive measure of machine intelligence, but rather as a cultural milestone and public barometer of AI progress. Today, the test has regained prominence with the emergence of models like GPT-4.5, which can hold complex, context-aware, emotionally intelligent conversations. What once seemed like a distant hypothetical is now an active, measurable challenge that GPT-4.5 has, by many accounts, overcome.
How GPT-4.5 fooled the judges: Inside the Turing Test study
In early 2025, a groundbreaking study conducted by researchers at the University of California, San Diego, provided the most substantial evidence yet that an AI could pass the Turing Test. In a controlled experiment involving over 500 participants, multiple conversational agents—including GPT-4.5, Meta’s LLaMa-3.1, and the classic chatbot ELIZA—were evaluated in blind text-based conversations. The participants were tasked with identifying whether they were talking to a human or a machine.
The results were astonishing: GPT-4.5 was judged to be human in 54% to 73% of interactions, depending on the scenario, surpassing the 50% chance baseline commonly used to define a Turing Test pass. In some cases, it outperformed actual human participants—who were correctly identified as human only 67% of the time.
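For a sense of what 'surpassing the baseline' means statistically, the short sketch below uses a one-sided binomial test to check whether a judged-human rate like 73% is significantly above the 50% chance level. The trial counts are hypothetical, not figures from the study.

```python
# Rough sketch: is a judged-human rate significantly above chance?
# Counts are hypothetical for illustration, not taken from the UCSD study.
from scipy.stats import binomtest

trials = 200          # hypothetical number of blind conversations
judged_human = 146    # hypothetical 'human' verdicts (73%)

result = binomtest(judged_human, trials, p=0.5, alternative="greater")
print(f"Judged human: {judged_human / trials:.0%}")
print(f"One-sided p-value vs 50% chance: {result.pvalue:.2g}")
# A small p-value means such a rate is very unlikely under random guessing,
# i.e. judges systematically mistook the machine for a human.
```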
That experiment marked the first time a contemporary AI model convincingly passed the Turing Test under rigorous scientific conditions. The study not only demonstrated the model’s technical capabilities—it also raised philosophical and ethical questions.
What does it mean for a machine to be ‘indistinguishable’ from a human? And more importantly, how should society respond to a world where AI can convincingly impersonate us?
Measuring up: GPT-4.5 vs LLaMa-3.1 and ELIZA
While GPT-4.5’s performance in the Turing Test has garnered much attention, its comparison with other models puts things into a clearer perspective. Meta’s LLaMa-3.1, a powerful and widely respected open-source model, also participated in the study.
It was identified as human in approximately 56% of interactions — a strong showing, though not far enough above chance to count as a clear pass under the study’s criteria. The result highlights how subtle differences in conversational nuance and coherence can significantly influence perception.
The study also revisited ELIZA, the pioneering 1960s chatbot designed to mimic a psychotherapist. While historically significant, ELIZA’s simplistic, rule-based responses saw it identified as non-human in around 77% of cases. That stark contrast with modern models demonstrates how far natural language processing has progressed over the past six decades.
The comparative results underscore an important point: success in human-AI interaction today depends not only on fluent language generation but also on the ability to adapt tone, context, and emotional resonance. GPT-4.5’s edge seems to come not from mere fluency but from its ability to emulate the subtle cues of human reasoning and expression — a quality that left many test participants second-guessing whether they were even talking to a machine.
The power of persona: How character shaped perception
One of the most intriguing aspects of the UC San Diego study was how assigning specific personas to AI models significantly influenced participants’ perceptions. When GPT-4.5 was framed as an introverted, geeky 19-year-old college student, it was consistently judged human more often than when it had no defined personality.
That seemingly small narrative detail acted as a powerful psychological cue, shaping how people interpreted its responses and adding a layer of realism to the conversation.
Slight awkwardness, informal phrasing, or quirky responses were not seen as flaws — they were consistent with the character. Participants were more likely to forgive or overlook certain imperfections if those quirks aligned with the model’s ‘personality’.
That finding reveals how intertwined identity and believability are in human communication, even when the identity is entirely artificial. The strategy also echoes something long known in storytelling and branding: people respond to characters, not just content.
In the context of AI, persona functions as a kind of narrative camouflage — not necessarily to deceive, but to disarm. It helps bridge the uncanny valley by offering users a familiar social framework. And as AI continues to evolve, it is clear that shaping how a model is perceived may be just as important as what the model is actually saying.
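For readers curious about the mechanics, a persona is typically just conditioning text supplied ahead of the conversation. The minimal sketch below uses the OpenAI chat API; the persona wording and the model name are illustrative assumptions, not the actual prompts or configuration used in the study.

```python
# Minimal sketch of persona assignment via a system prompt.
# The persona text and model name are illustrative assumptions,
# not the prompts or settings used in the UCSD study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are a somewhat introverted, geeky 19-year-old college student. "
    "Write casually, with informal phrasing and the occasional typo."
)

response = client.chat.completions.create(
    model="gpt-4.5-preview",  # assumed model name for illustration
    messages=[
        {"role": "system", "content": PERSONA},  # the persona lives here
        {"role": "user", "content": "hey, what are you up to tonight?"},
    ],
)
print(response.choices[0].message.content)
```

The design point is that a persona is ordinary conditioning text: the model has no separate identity mechanism, only instructions that shape how its output reads.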
Limitations of the Turing Test: Beyond the illusion of intelligence
While passing the Turing Test has long been viewed as a milestone in AI, many experts argue that it is not the definitive measure of machine intelligence. The test focuses on imitation — whether an AI can appear human in conversation — rather than on genuine understanding, reasoning, or consciousness. In that sense, it is more about performance than true cognitive capability.
Critics point out that large language models like GPT-4.5 do not ‘understand’ language in the human sense – they generate text by predicting the most statistically probable next word based on patterns in massive datasets. That allows them to generate impressively coherent responses, but it does not equate to comprehension, self-awareness, or independent thought.
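To see what 'predicting the most statistically probable next word' looks like mechanically, here is a toy sketch with an invented four-word vocabulary and made-up scores; real models do the same thing over tens of thousands of candidate tokens at every step.

```python
# Toy sketch of next-token prediction: the model scores every candidate
# token, converts the scores to probabilities, and picks (or samples) one.
# The vocabulary and logits below are made up for illustration.
import numpy as np

vocab = ["cat", "dog", "mat", "moon"]
logits = np.array([2.1, 0.3, 3.4, -1.0])  # raw scores for the next token

# Softmax turns raw scores into a probability distribution.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

for token, p in zip(vocab, probs):
    print(f"{token:>5}: {p:.2%}")

# Greedy decoding picks the single most probable token;
# sampling draws from the distribution instead, adding variety.
print("greedy :", vocab[int(np.argmax(probs))])
print("sampled:", np.random.choice(vocab, p=probs))
```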
No matter how convincing, the illusion of intelligence is still an illusion — and mistaking it for something more can lead to misplaced trust or overreliance. Despite its symbolic power, the Turing Test was never meant to be the final word on AI.
As AI systems grow increasingly sophisticated, new benchmarks are needed — ones that go beyond linguistic mimicry to assess reasoning, ethical decision-making, and robustness in real-world environments. Passing the Turing Test may grab headlines, but the real test of intelligence lies far beyond the ability to talk like us.
Wider implications: Rethinking the role of AI in society
GPT-4.5’s success in the Turing Test does not just mark a technical achievement — it forces us to confront deeper societal questions. If AI can convincingly pass as a human in open conversation, what does that mean for trust, communication, and authenticity in our digital lives?
From customer service bots to AI-generated news anchors, the line between human and machine is blurring — and the implications are far from purely academic. These developments are challenging existing norms in areas such as journalism, education, healthcare, and even online dating.
How do we ensure transparency when AI is involved? Should AI be required to disclose its identity in every interaction? And how do we guard against malicious uses — such as deepfake conversations or synthetic personas designed to manipulate, mislead, or exploit?
On a broader level, the emergence of human-sounding AI invites a rethinking of agency and responsibility. If a machine can persuade, sympathise, or influence like a person — who is accountable when things go wrong?
As AI becomes more integrated into the human experience, society must evolve its frameworks not only for regulation and ethics but also for cultural adaptation. GPT-4.5 may have passed the Turing Test, but the test for us, as a society, is just beginning.
What comes next: Human-machine dialogue in the post-Turing era
With GPT-4.5 crossing the Turing threshold, we are no longer asking whether machines can talk like us — we are now asking what that means for how we speak, think, and relate to machines. That moment represents a paradigm shift: from testing the machine’s ability to imitate humans to understanding how humans will adapt to coexist with machines that no longer feel entirely artificial.
Future AI models will likely push this boundary even further — engaging in conversations that are not only coherent but also deeply contextual, emotionally attuned, and morally responsive. The bar for what feels ‘human’ in digital interaction is rising rapidly, and with it comes the need for new social norms, protocols, and perhaps even new literacies.
We will need to learn not only how to talk to machines but how to live with them — as collaborators, counterparts, and, in some cases, as reflections of ourselves. In the post-Turing era, the test is no longer whether machines can fool us — it is whether we can maintain clarity, responsibility, and humanity in a world where the artificial feels increasingly real.
GPT-4.5 may have passed a historic milestone, but the real story is just beginning — not one of machines becoming human, but of humans redefining what it means to be ourselves in dialogue with them.
Meta Platforms has announced it will begin using public posts, comments, and user interactions with its AI tools to train its AI models in the EU, instead of limiting training data to existing US-based inputs.
The move follows the recent European rollout of Meta AI, which had been delayed since June 2024 due to data privacy concerns raised by regulators. The company said EU users of Facebook and Instagram would receive notifications outlining how their data may be used, along with a link to opt out.
Meta clarified that while questions posed to its AI and public content from adult users may be used, private messages and data from under-18s would be excluded from training.
Instead of expanding quietly, the company is now making its plans public in an attempt to meet the EU’s transparency expectations.
The shift comes after Meta paused its original launch last year at the request of Ireland’s Data Protection Commission, which expressed concerns about using social media content for AI development. The move also drew criticism from advocacy group NOYB, which has urged regulators to intervene more decisively.
Meta joins a growing list of tech firms under scrutiny in Europe. Ireland’s privacy watchdog is already investigating Elon Musk’s X and Google for similar practices involving personal data use in AI model training.
Instead of treating such probes as isolated incidents, the EU appears to be setting a precedent that could reshape how global companies handle user data in AI development.
Elon Musk’s X platform is under formal investigation by the Irish Data Protection Commission over its alleged use of public posts from EU users to train the Grok AI chatbot.
The probe is centred on whether X Internet Unlimited Company, the platform’s newly renamed Irish entity, has adhered to key GDPR principles while sharing publicly accessible data, like posts and interactions, with its affiliate xAI, which develops the chatbot.
Concerns have grown over the lack of explicit user consent, especially as other tech giants such as Meta signal similar data usage plans.
A move like this is part of a wider regulatory push in the EU to hold AI developers accountable instead of allowing unchecked experimentation. Experts note that many AI firms have deployed tools under a ‘build first, ask later’ mindset, an approach at odds with Europe’s strict data laws.
Should regulators conclude that public data still requires user consent, it could force a dramatic shift in how AI models are developed, not just in Europe but around the world.
Enterprises are now treading carefully. The investigation into X is already affecting AI adoption across the continent, with legal and reputational risks weighing heavily on decision-makers.
In one case, a Nordic bank halted its AI rollout midstream after its legal team couldn’t confirm whether European data had been used without proper disclosure. Instead of pushing ahead, the project was rebuilt using fully documented, EU-based training data.
The consequences could stretch far beyond the EU. Ireland’s probe might become a global benchmark for how governments view user consent in the age of data scraping and machine learning.
Instead of enforcement being region-specific, this investigation could inspire similar actions from regulators in places like Singapore and Canada. As AI continues to evolve, companies may have no choice but to adopt more transparent practices or face a rising tide of legal scrutiny.
Meta is expanding its ‘Teen Accounts’ feature to Facebook and Messenger following its initial launch on Instagram last September.
The rollout begins in the US, UK, Australia, and Canada, with plans to reach more countries soon.
These accounts are designed to give younger users an app experience with stronger safety measures, automatically activating restrictions to limit exposure to harmful content and interactions.
Teen users will be automatically placed in a more controlled environment that restricts who can message, comment, or tag them.
Only friends and previously contacted users can reach out via Messenger or see their stories, and tagging and mentions are also limited.
These settings require parental approval for any changes, and teens under 16 must have consent to alter key safety features.
On Instagram, Meta is introducing stricter safeguards. Users under 16 now need parental permission to go live or to turn off the tool that blurs images containing suspected nudity in direct messages.
Meta is also introducing screen-time reminders that prompt teens to log off after one hour, as well as an overnight ‘Quiet mode’ to reduce late-night use.
The initiative follows increasing pressure on social media platforms to address concerns around teen mental health.
In recent years, US lawmakers and the Surgeon General have highlighted the risks associated with young users’ exposure to unregulated digital environments.
A study commissioned by Meta and Ipsos found that 94% of surveyed parents support Teen Accounts, with 85% saying the controls help ensure more positive online experiences for their children.
As digital safety continues to grow as a priority, Meta’s expansion of Teen Accounts signals its willingness to build more accountable, youth-friendly online spaces across its platforms.
Meta has come under fire once again, this time over a new AI experiment on Instagram that suggests comments for users. Some users accused the company of using AI to inflate engagement metrics, potentially misleading advertisers and diminishing authentic user interaction.
The feature, spotted by test users, involves a pencil icon next to the comment bar on Instagram posts. Tapping it generates suggested replies based on the image’s content.
Meta has confirmed the feature is in testing but did not reveal plans for a broader launch. The company stated that it is exploring ways to incorporate Meta AI across different parts of its apps, including feeds, comments, groups, and search.
Public reaction has been largely negative, with concerns that AI-generated comments could flood the platform with inauthentic conversations. Social media users voiced fears of fake interactions replacing genuine ones, and some accused Meta of deceiving advertisers through inflated statistics.
Comparisons to dystopian scenarios were common, as users questioned the future of online social spaces.
This isn’t the first time Meta has faced backlash for its AI ventures. Previous attempts included AI personas modelled on celebrities and diverse identities, which were criticised for being disingenuous and engineered by largely homogenous development teams.
The future of AI-generated comments on Instagram remains uncertain as scrutiny continues to mount.
Han, who briefly held the position of acting president before being suspended in December, pledged to stabilise the country and prioritise national interests amid rising tensions over US trade policies.
The Constitutional Court’s decision returns Han to power during a time of heightened political instability, sparked by President Yoon Suk Yeol’s controversial declaration of martial law last year.
Yoon’s actions led to mass protests and a wave of impeachments, resignations, and criminal charges across the political spectrum.
While Yoon awaits a separate ruling and trial over charges of leading an insurrection, Han expressed gratitude to the court and vowed to put an end to ‘extreme confrontation in politics.’
Han is one of South Korea’s most experienced officials, and his return is seen as a move towards continuity in governance. He has served under five presidents from both major parties and is regarded as a figure capable of bridging political divides.
Despite opposition criticism that he failed to prevent Yoon’s martial law move, Han denied any wrongdoing and has committed to guiding South Korea through external economic challenges, especially those posed by the United States.
The court’s pending decision on President Yoon’s fate remains a focal point of national attention. Lee Jae-myung, leader of the opposition Democratic Party and a potential successor, has urged the court to act swiftly to end the uncertainty.
With rallies continuing across the country both in favour of and against Yoon, the outcome could trigger a snap election within 60 days if the president is removed.
Google has agreed to pay $28 million (€25.6 million) to settle a class action lawsuit alleging it favoured white and Asian employees by offering them higher pay and better career progression.
The case, which covered at least 6,632 employees in California between 2018 and 2024, won preliminary approval from a Santa Clara County judge last week.
The lawsuit was led by Ana Cantu, a former Google employee who claimed the company placed white and Asian workers in higher job levels while restricting promotions and pay increases for others.
Cantu, who worked in Google’s people operations and cloud departments for seven years, alleged she was denied career advancement despite performing well. She argued that Google’s practices violated the California Equal Pay Act.
A Google spokesperson confirmed the settlement but maintained that the company had not engaged in discriminatory treatment. A final hearing is scheduled for September, where the court will decide whether to grant full approval of the settlement.
The Trump administration has introduced a new app that allows undocumented migrants in the US to self-deport rather than risk arrest and detention.
The United States Customs and Border Protection (CBP) app, called CBP Home, includes an option for individuals to signal their ‘intent to depart.’ Homeland Security Secretary Kristi Noem said the app gives migrants a chance to leave voluntarily and potentially return legally in the future.
Noem warned that those who do not leave will face deportation and a lifetime ban from re-entering the country. The administration has stepped up pressure on undocumented migrants, with new regulations set to take effect in April requiring them to register with the government or face fines and jail time.
The launch of CBP Home follows Trump’s decision to shut down CBP One, a Biden-era app that allowed migrants in Mexico to schedule asylum appointments. The move left thousands of migrants stranded at the border with uncertain prospects.
Trump has pledged to carry out record deportations, although his administration’s current removal numbers lag behind those recorded under President Joe Biden.
The CBP Home app marks a shift in immigration policy, aiming to encourage voluntary departures while tightening enforcement measures against those who remain illegally.
India has repatriated nearly 300 of its citizens who were lured to Southeast Asian countries with fake job offers and forced into cybercrime and other fraudulent activities.
The rescue was coordinated by Indian embassies in Myanmar and Thailand, with an Indian Air Force aircraft bringing the workers back from Mae Sot in Thailand. Many had been trapped in scam centres along the Thailand-Myanmar border, where criminal networks operate large-scale online fraud schemes.
Authorities in Thailand have intensified their crackdown on these illegal operations, arresting 100 people last week. Countries including China and Indonesia have also been working to bring back their nationals who were similarly deceived.
According to the United Nations, criminal syndicates have trafficked hundreds of thousands of people to these centres, generating billions of dollars from online scams.
The government of India has warned its citizens against falling prey to fraudulent job offers and urged them to verify employers and recruitment agents before accepting positions abroad.
Officials continue to collaborate with international agencies to combat human trafficking and cyber fraud, aiming to prevent further exploitation of vulnerable workers.