The Children’s Commissioner has urged the UK Government to ban AI apps that create sexually explicit images through “nudification” technology. AI tools capable of manipulating real photos to make people appear naked are being used to target children.
Concerns in the UK are growing as these apps are now widely accessible online, often through social media and search platforms. In a newly published report, the Commissioner, Dame Rachel de Souza, warned that children, particularly girls, are altering their online behaviour out of fear of becoming victims of such technologies.
She stressed that while AI holds great potential, it also poses serious risks to children’s safety. The report also recommends stronger legal duties for AI developers and improved systems to remove explicit deepfake content from the internet.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Cancer remains one of the leading causes of death worldwide, with nearly 20 million new cases and 9.7 million deaths recorded in 2022.
In response, Japanese startup Craif, spun off from Nagoya University in 2018, is developing AI-powered early cancer detection software that analyses microRNA (miRNA) rather than relying on traditional methods.
The company has just raised $22 million in Series C funding, bringing its total to $57 million, with plans to expand into the US market and strengthen its research and development efforts.
Craif was founded after co-founder and CEO Ryuichi Onose experienced the impact of cancer within his own family. Partnering with associate professor Takao Yasui, who had discovered a new technique for early cancer detection using urinary biomarkers, the company created a non-invasive urine-based test.
Instead of invasive blood tests, Craif’s technology allows patients to detect cancers as early as Stage 1 from the comfort of their own homes, making regular screening more accessible and less daunting.
Unlike competitors who depend on cell-free DNA (cfDNA), Craif uses microRNA, a biomarker known for its strong link to early cancer biology. Urine is chosen instead of blood because it contains fewer impurities, offering clearer signals and reducing measurement errors.
Craif’s first product, miSignal, which tests for seven types of cancer, is already on the market in Japan and has attracted around 20,000 users through clinics, pharmacies, direct sales, and corporate wellness programmes.
The new funding will enable Craif to enter the US market, complete clinical trials by 2029, and seek FDA approval. The company also plans to expand its detection capabilities to cover ten types of cancer this year and to explore applications beyond cancer, such as dementia.
With a growing presence in California and partnerships with dozens of US medical institutions, Craif is positioning itself as a major player in the future of early disease detection.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The AI race between China and the USA shifts to classrooms. As AI governance expert Jovan Kurbalija highlights in his analysis of global AI strategies, the two countries see AI literacy as a ‘strategic imperative’. From President Trump’s executive order to advance AI education to China’s new AI education strategy, both superpowers are betting big on nurturing homegrown AI talent.
Kurbalija sees the focus on AI education as a rare bright spot in increasingly fractured tech geopolitics: ‘When students in Shanghai debug code alongside peers in Silicon Valley via open-source platforms, they’re not just building algorithms—they’re building trust.’
This grassroots collaboration, he argues, could soften the edges of emerging AI nationalism and support new types of digital and AI diplomacy.
He concludes that the latest AI education initiatives are ‘not just about who wins the AI race but, even more importantly, how we prepare humanity for the forthcoming AI transformation and coexistence with advanced technologies.’
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A controversial new startup called Cluely has secured $5.3 million in seed funding to expand its AI-powered tool designed to help users ‘cheat on everything,’ from job interviews to exams.
Founded by 21-year-old Chungin ‘Roy’ Lee and Neel Shanmugam—both former Columbia University students—the tool works via a hidden browser window that remains invisible to interviewers or test supervisors.
The project began as ‘Interview Coder,’ originally intended to help users pass technical coding interviews on platforms like LeetCode.
Both founders faced disciplinary action at Columbia over the tool, eventually dropping out of the university. Despite ethical concerns, Cluely claims its technology has already surpassed $3 million in annual recurring revenue.
The company has drawn comparisons between its tool and past innovations like the calculator and spellcheck, arguing that it challenges outdated norms in the same way. A viral launch video showing Lee using Cluely on a date sparked backlash, with critics likening it to a scene from Black Mirror.
Cluely’s mission has sparked widespread debate over the use of AI in high-stakes settings. While some applaud its bold approach, others worry it promotes dishonesty.
Amazon, where Lee reportedly landed an internship using the tool, declined to comment on the case directly but reiterated that candidates must agree not to use unauthorised tools during the hiring process.
The startup’s rise comes amid growing concern over how AI may be used—or misused—in both professional and personal spheres.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Hamburg Declaration on Responsible AI for the Sustainable Development Goals (SDGs) seeks to build a shared vision for AI that supports fair, inclusive, and sustainable global development. It is set to be officially adopted at the Hamburg Sustainability Conference in June 2025.
The initiative brings together voices from across sectors—governments, civil society, academia, and industry—to shape how AI can ethically and effectively align with the SDGs. Central to this effort is an open consultation process inviting stakeholders to provide feedback on the draft declaration, participate in expert discussions, and endorse its principles.
In addition to the declaration itself, the initiative also features the AI SDG Compendium, a global registry of AI projects contributing to sustainable development. The process has already gained visibility at major international forums like the Internet Governance Forum and the AI Action Summit in Paris, reflecting its growing significance in leveraging responsible AI for the SDGs.
The Declaration aims to ensure that AI is developed and used in ways that respect human rights, reduce inequalities, and foster sustainable progress. Establishing shared principles and promoting collaboration across sectors and regions sets a foundation for responsible AI that serves both people and the planet.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
From GPT-4 to 4.5: What has changed and why it matters
In February 2025, OpenAI released GPT-4.5, the latest iteration in its series of large language models (LLMs), pushing the boundaries of what machines can do with language understanding and generation. Building on the strengths of GPT-4, it demonstrates improved reasoning capabilities, a more nuanced understanding of context, and smoother, more human-like interactions.
GPT-4.5 sets itself apart from its predecessors through refined alignment techniques, better memory over longer conversations, and increased control over tone, persona, and factual accuracy. Its ability to maintain coherent, emotionally resonant exchanges over extended dialogue marks a turning point in human-AI communication. These improvements are not just technical — they significantly affect the way we work, communicate, and relate to intelligent systems.
The increasing ability of GPT-4.5 to mimic human behaviour has raised a key question: Can it really fool us into thinking it is one of us? That question has recently been answered — and it has everything to do with the Turing Test.
The Turing Test: Origins, purpose, and modern relevance
In 1950, British mathematician and computer scientist Alan Turing posed a provocative question: ‘Can machines think?’ In his seminal paper ‘Computing Machinery and Intelligence,’ he proposed what would later become known as the Turing Test — a practical way of evaluating a machine’s ability to exhibit intelligent behaviour indistinguishable from that of a human.
In its simplest form, if a human evaluator cannot reliably distinguish between a human’s and a machine’s responses during a conversation, the machine is said to have passed the test. For decades, the Turing Test remained more of a philosophical benchmark than a practical one.
Early chatbots like ELIZA in the 1960s created the illusion of intelligence, but their scripted and shallow interactions fell far short of genuine human-like communication. As AI progressed, many researchers questioned the test’s relevance, arguing that mimicking conversation is not the same as true understanding or consciousness.
Despite these criticisms, the Turing Test has endured — not as a definitive measure of machine intelligence, but rather as a cultural milestone and public barometer of AI progress. Today, the test has regained prominence with the emergence of models like GPT-4.5, which can hold complex, context-aware, emotionally intelligent conversations. What once seemed like a distant hypothetical is now an active, measurable challenge that GPT-4.5 has, by many accounts, overcome.
How GPT-4.5 fooled the judges: Inside the Turing Test study
In early 2025, a groundbreaking study conducted by researchers at the University of California, San Diego, provided the most substantial evidence yet that an AI could pass the Turing Test. In a controlled experiment involving over 500 participants, multiple conversational agents—including GPT-4.5, Meta’s LLaMa-3.1, and the classic chatbot ELIZA—were evaluated in blind text-based conversations. The participants were tasked with identifying whether they spoke to a human or a machine.
The results were astonishing: GPT-4.5 was judged to be human in 54% to 73% of interactions, depending on the scenario, surpassing the 50% chance baseline commonly used to define a Turing Test pass. In some cases, it outperformed actual human participants—who were correctly identified as human only 67% of the time.
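To make the pass criterion concrete, here is a minimal sketch of how one could check whether a judged-as-human rate genuinely exceeds the 50% chance baseline. The counts used are illustrative placeholders, not the study’s raw data.

```python
# Illustrative check of a Turing Test 'pass': is the judged-as-human rate
# significantly above the 50% chance baseline? The counts below are made up
# for the example and do not reproduce the study's data.
from scipy.stats import binomtest

judged_human = 73   # hypothetical: the AI was picked as 'human' in 73 conversations
total_trials = 100  # hypothetical total number of conversations

result = binomtest(judged_human, total_trials, p=0.5, alternative="greater")
print(f"Judged-human rate: {judged_human / total_trials:.0%}")
print(f"p-value against the 50% chance baseline: {result.pvalue:.4f}")
```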
That experiment marked the first time a contemporary AI model convincingly passed the Turing Test under rigorous scientific conditions. The study not only demonstrated the model’s technical capabilities—it also raised philosophical and ethical questions.
What does it mean for a machine to be ‘indistinguishable’ from a human? And more importantly, how should society respond to a world where AI can convincingly impersonate us?
Measuring up: GPT-4.5 vs LLaMa-3.1 and ELIZA
While GPT-4.5’s performance in the Turing Test has garnered much attention, its comparison with other models puts things into a clearer perspective. Meta’s LLaMa-3.1, a powerful and widely respected open-source model, also participated in the study.
It was identified as human in approximately 56% of interactions — a strong showing, although it fell just short of the commonly accepted benchmark for a Turing Test pass. The result highlights how subtle differences in conversational nuance and coherence can significantly influence perception.
The study also revisited ELIZA, the pioneering chatbot from the 1960s designed to mimic a psychotherapist. While historically significant, ELIZA’s simplistic, rule-based structure resulted in it being identified as non-human in most cases — around 77%. That stark contrast with modern models demonstrates how far natural language processing has progressed over the past six decades.
The comparative results underscore an important point: success in human-AI interaction today depends not only on fluent language generation but also on the ability to adapt tone and context and to convey emotional resonance. GPT-4.5’s edge seems to come not from mere fluency but from its ability to emulate the subtle cues of human reasoning and expression — a quality that left many test participants second-guessing whether they were even talking to a machine.
The power of persona: How character shaped perception
One of the most intriguing aspects of the UC San Diego study was how assigning specific personas to AI models significantly influenced participants’ perceptions. When GPT-4.5 was framed as an introverted, geeky 19-year-old college student, it consistently scored higher in being perceived as human than when it had no defined personality.
The seemingly small narrative detail was a powerful psychological cue that shaped how people interpreted its responses. The use of persona added a layer of realism to the conversation.
Slight awkwardness, informal phrasing, or quirky responses were not seen as flaws — they were consistent with the character. Participants were more likely to forgive or overlook certain imperfections if those quirks aligned with the model’s ‘personality’.
That finding reveals how intertwined identity and believability are in human communication, even when the identity is entirely artificial. The strategy also echoes something long known in storytelling and branding: people respond to characters, not just content.
In the context of AI, persona functions as a kind of narrative camouflage — not necessarily to deceive, but to disarm. It helps bridge the uncanny valley by offering users a familiar social framework. And as AI continues to evolve, it is clear that shaping how a model is perceived may be just as important as what the model is actually saying.
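As a concrete illustration of how a persona can be attached to a model in practice, the sketch below passes a persona description as a system prompt to a chat-completion API. The persona wording and the model identifier are placeholders of our own; the study’s actual prompt is not reproduced here.

```python
# Hypothetical sketch: assigning a persona through a system prompt.
# The persona text and the model name are illustrative assumptions,
# not the configuration used in the UC San Diego study.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

persona = (
    "You are a somewhat introverted, geeky 19-year-old college student. "
    "Use casual language, keep replies short, and don't be afraid of the odd typo."
)

response = client.chat.completions.create(
    model="gpt-4.5-preview",  # placeholder model identifier
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "So, what do you do for fun?"},
    ],
)

print(response.choices[0].message.content)
```

Framed this way, the persona is simply part of the input: the model itself never changes, but the social frame through which its replies are read does.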
Limitations of the Turing Test: Beyond the illusion of intelligence
While passing the Turing Test has long been viewed as a milestone in AI, many experts argue that it is not the definitive measure of machine intelligence. The test focuses on imitation — whether an AI can appear human in conversation — rather than on genuine understanding, reasoning, or consciousness. In that sense, it is more about performance than true cognitive capability.
Critics point out that large language models like GPT-4.5 do not ‘understand’ language in the human sense – they generate text by predicting the most statistically probable next word based on patterns in massive datasets. That allows them to generate impressively coherent responses, but it does not equate to comprehension, self-awareness, or independent thought.
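To see what ‘predicting the next word’ looks like in practice, here is a minimal sketch using the open GPT-2 model from Hugging Face; GPT-4.5 itself is proprietary and vastly larger, but the underlying mechanism of ranking candidate next tokens by probability is the same.

```python
# Minimal sketch of next-token prediction with an open model (GPT-2),
# used purely to illustrate the mechanism described above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The Turing Test asks whether a machine can"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the entire vocabulary for the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

# The model ranks every vocabulary item by probability; comprehension
# plays no part in the computation.
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>12}  p={prob.item():.3f}")
```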
No matter how convincing, the illusion of intelligence is still an illusion — and mistaking it for something more can lead to misplaced trust or overreliance. Despite its symbolic power, the Turing Test was never meant to be the final word on AI.
As AI systems grow increasingly sophisticated, new benchmarks are needed — ones that go beyond linguistic mimicry to assess reasoning, ethical decision-making, and robustness in real-world environments. Passing the Turing Test may grab headlines, but the real test of intelligence lies far beyond the ability to talk like us.
Wider implications: Rethinking the role of AI in society
GPT-4.5’s success in the Turing Test does not just mark a technical achievement — it forces us to confront deeper societal questions. If AI can convincingly pass as a human in open conversation, what does that mean for trust, communication, and authenticity in our digital lives?
From customer service bots to AI-generated news anchors, the line between human and machine is blurring — and the implications are far from purely academic. These developments are challenging existing norms in areas such as journalism, education, healthcare, and even online dating.
How do we ensure transparency when AI is involved? Should AI be required to disclose its identity in every interaction? And how do we guard against malicious uses — such as deepfake conversations or synthetic personas designed to manipulate, mislead, or exploit?
On a broader level, the emergence of human-sounding AI invites a rethinking of agency and responsibility. If a machine can persuade, sympathise, or influence like a person — who is accountable when things go wrong?
As AI becomes more integrated into the human experience, society must evolve its frameworks not only for regulation and ethics but also for cultural adaptation. GPT-4.5 may have passed the Turing Test, but the test for us, as a society, is just beginning.
What comes next: Human-machine dialogue in the post-Turing era
With GPT-4.5 crossing the Turing threshold, we are no longer asking whether machines can talk like us — we are now asking what that means for how we speak, think, and relate to machines. That moment represents a paradigm shift: from testing the machine’s ability to imitate humans to understanding how humans will adapt to coexist with machines that no longer feel entirely artificial.
Future AI models will likely push this boundary even further — engaging in conversations that are not only coherent but also deeply contextual, emotionally attuned, and morally responsive. The bar for what feels ‘human’ in digital interaction is rising rapidly, and with it comes the need for new social norms, protocols, and perhaps even new literacies.
We will need to learn not only how to talk to machines but how to live with them — as collaborators, counterparts, and, in some cases, as reflections of ourselves. In the post-Turing era, the test is no longer whether machines can fool us — it is whether we can maintain clarity, responsibility, and humanity in a world where the artificial feels increasingly real.
GPT-4.5 may have passed a historic milestone, but the real story is just beginning — not one of machines becoming human, but of humans redefining what it means to be ourselves in dialogue with them.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Meta Platforms has announced it will begin using public posts, comments, and user interactions with its AI tools to train its AI models in the EU, instead of limiting training data to existing US-based inputs.
The move follows the recent European rollout of Meta AI, which had been delayed since June 2024 due to data privacy concerns raised by regulators. The company said EU users of Facebook and Instagram would receive notifications outlining how their data may be used, along with a link to opt out.
Meta clarified that while questions posed to its AI and public content from adult users may be used, private messages and data from under-18s would be excluded from training.
Instead of expanding quietly, the company is now making its plans public in an attempt to meet the EU’s transparency expectations.
The shift comes after Meta paused its original launch last year at the request of Ireland’s Data Protection Commission, which expressed concerns about using social media content for AI development. The move also drew criticism from advocacy group NOYB, which has urged regulators to intervene more decisively.
Meta joins a growing list of tech firms under scrutiny in Europe. Ireland’s privacy watchdog is already investigating Elon Musk’s X and Google for similar practices involving personal data use in AI model training.
Instead of treating such probes as isolated incidents, the EU appears to be setting a precedent that could reshape how global companies handle user data in AI development.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Elon Musk’s X platform is under formal investigation by the Irish Data Protection Commission over its alleged use of public posts from EU users to train the Grok AI chatbot.
The probe is centred on whether X Internet Unlimited Company, the platform’s newly renamed Irish entity, has adhered to key GDPR principles while sharing publicly accessible data, like posts and interactions, with its affiliate xAI, which develops the chatbot.
Concerns have grown over the lack of explicit user consent, especially as other tech giants such as Meta signal similar data usage plans.
The move is part of a wider regulatory push in the EU to hold AI developers accountable instead of allowing unchecked experimentation. Experts note that many AI firms have deployed tools under a ‘build first, ask later’ mindset, an approach at odds with Europe’s strict data laws.
Should regulators conclude that public data still requires user consent, it could force a dramatic shift in how AI models are developed, not just in Europe but around the world.
Enterprises are now treading carefully. The investigation into X is already affecting AI adoption across the continent, with legal and reputational risks weighing heavily on decision-makers.
In one case, a Nordic bank halted its AI rollout midstream after its legal team couldn’t confirm whether European data had been used without proper disclosure. Instead of pushing ahead, the project was rebuilt using fully documented, EU-based training data.
The consequences could stretch far beyond the EU. Ireland’s probe might become a global benchmark for how governments view user consent in the age of data scraping and machine learning.
Instead of enforcement being region-specific, this investigation could inspire similar actions from regulators in places like Singapore and Canada. As AI continues to evolve, companies may have no choice but to adopt more transparent practices or face a rising tide of legal scrutiny.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Meta is expanding its ‘Teen Accounts’ feature to Facebook and Messenger following its initial launch on Instagram last September.
The rollout begins in the US, UK, Australia, and Canada, with plans to reach more countries soon.
These accounts are designed to give younger users an app experience with stronger safety measures, automatically activating restrictions to limit exposure to harmful content and interactions.
Teen users will be automatically placed in a more controlled environment that restricts who can message, comment, or tag them.
Only friends and previously contacted users can reach out via Messenger or see their stories, and tagging and mentions are also limited.
These settings require parental approval for any changes, and teens under 16 must have consent to alter key safety features.
On Instagram, Meta is introducing stricter safeguards. Users under 16 now need parental permission to go live or to turn off the tool that blurs images containing suspected nudity in direct messages.
Meta also implements reminders to limit screen time, prompting teens to log off after one hour and enabling overnight ‘Quiet mode’ to reduce late-night use.
The initiative follows increasing pressure on social media platforms to address concerns around teen mental health.
In recent years, US lawmakers and the Surgeon General have highlighted the risks associated with young users’ exposure to unregulated digital environments.
A study commissioned by Meta and Ipsos found that 94% of surveyed parents support Teen Accounts, with 85% saying the controls help ensure more positive online experiences for their children.
As digital safety continues to grow as a priority, Meta’s expansion of Teen Accounts signals its willingness to build more accountable, youth-friendly online spaces across its platforms.
For more information on these topics, visit diplomacy.edu.
Meta has come under fire once again, this time over a new AI experiment on Instagram that suggests comments for users. Some users accused the company of using AI to inflate engagement metrics, potentially misleading advertisers and diminishing authentic user interaction.
The feature, spotted by test users, involves a pencil icon next to the comment bar on Instagram posts. Tapping it generates suggested replies based on the image’s content.
Meta has confirmed the feature is in testing but did not reveal plans for a broader launch. The company stated that it is exploring ways to incorporate Meta AI across different parts of its apps, including feeds, comments, groups, and search.
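Meta has not explained how the suggestions are generated. As a rough illustration of the general idea only, the sketch below captions an image with an open-source model and turns the caption into candidate comments; the model choice and the templates are assumptions for the example and bear no relation to Meta’s actual implementation.

```python
# Rough, hypothetical sketch of image-conditioned comment suggestions.
# This is NOT Meta's implementation: the captioning model and the comment
# templates are illustrative assumptions only.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def suggest_comments(image_path: str) -> list[str]:
    # Describe the image, e.g. "a dog running on a beach".
    caption = captioner(image_path)[0]["generated_text"]
    # Turn the description into a few comment-like suggestions.
    return [
        f"Love this! {caption.capitalize()}.",
        f"Great shot of {caption}!",
        f"Wow, {caption} looks amazing.",
    ]

print(suggest_comments("photo.jpg"))
```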
Public reaction has been largely negative, with concerns that AI-generated comments could flood the platform with inauthentic conversations. Social media users voiced fears of fake interactions replacing genuine ones, and some accused Meta of deceiving advertisers through inflated statistics.
Comparisons to dystopian scenarios were common, as users questioned the future of online social spaces.
This isn’t the first time Meta has faced backlash for its AI ventures. Previous attempts included AI personas modelled on celebrities and diverse identities, which were criticised for being disingenuous and engineered by largely homogenous development teams.
The future of AI-generated comments on Instagram remains uncertain as scrutiny continues to mount.
For more information on these topics, visit diplomacy.edu.