A cornerstone of the global cybersecurity ecosystem is facing an uncertain future. US government funding for MITRE Corporation to operate and maintain the Common Vulnerabilities and Exposures (CVE) program is set to expire, an unprecedented development that could significantly disrupt how security flaws are identified, tracked, and mitigated worldwide.
Launched in 1999, the CVE program has become the de facto international standard for cataloging publicly known software vulnerabilities. Managed by MITRE under sponsorship from the Department of Homeland Security (DHS) and the Cybersecurity and Infrastructure Security Agency (CISA), the program has published over 274,000 CVE records to date.
MITRE Vice President Yosry Barsoum has warned that the lapse in funding will not only halt the organisation's ability to continue developing and modernising the CVE system but could also impact related initiatives such as the Common Weakness Enumeration (CWE). These tools are essential for vulnerability classification, secure coding practices, and the prioritisation of cybersecurity risks.
While Barsoum noted that the US government is working to find a resolution, the looming gap has already prompted independent action. Cybersecurity firm VulnCheck, which acts as a CVE Numbering Authority (CNA), has preemptively reserved 1,000 CVEs for 2025 in an effort to maintain continuity.
Industry experts warn the consequences could be far-reaching. Despite the challenges, MITRE has affirmed its commitment to the CVE program and its role as a global resource. However, unless a new funding arrangement is secured, the future of this foundational infrastructure remains in question.
Google has revealed it suspended 39.2 million advertiser accounts in 2024, more than triple the number from the previous year, as part of its latest push to combat ad fraud.
The tech giant said it is now able to block most bad actors before they even run an advert, thanks to advanced large language models and detection signals such as fake business details and fraudulent payments.
Automation is not the whole story, however: a team of over 100 experts from across Google and DeepMind also reviews deepfake scams and develops targeted countermeasures.
The company rolled out more than 50 LLM-based safety updates last year and introduced over 30 changes to advertising and publishing policies. These efforts, alongside other technical reinforcements, led to a 90% drop in reports of deepfake ads.
The US saw the highest number of suspensions, while India followed with 2.9 million accounts taken down. In both countries, ads were removed for violations such as trademark abuse, misleading personalisation, and financial service scams.
Overall, Google blocked 5.1 billion ads globally and restricted a further 9.1 billion. Nearly half a billion of those removed were linked specifically to scam activity.
In a year when half the global population headed to the polls, Google also verified over 8,900 election advertisers and took down 10.7 million political ads.
While the scale of suspensions may raise concerns about fairness, Google said human reviews are included in the appeals process.
The company acknowledged previous confusion over enforcement clarity and is now updating its messaging to ensure advertisers understand the reasons behind account actions more clearly.
Anthropic has introduced a new integration that allows its AI chatbot, Claude, to connect directly with Google Workspace.
The feature, now in beta for premium subscribers, enables Claude to reference content from Gmail, Google Calendar, and Google Docs to deliver more personalised and context-aware responses.
Users can expect in-line citations showing where specific information originated from within their Google account.
This integration is available for subscribers on the Max, Team, Enterprise, and Pro plans, though multi-user accounts require administrator approval.
While Claude can read emails and review documents, it cannot send emails or schedule events. Anthropic insists the system uses strict access controls and does not train its models on user data by default.
The update arrives as part of Anthropic’s broader efforts to enhance Claude’s appeal in a competitive AI landscape.
Alongside the Workspace integration, the company launched Claude Research, a tool that performs real-time web searches to provide fast, in-depth answers.
Although its user base is still smaller than ChatGPT's, Claude is growing steadily, reaching 3.3 million web users in March 2025.
Between 7 and 11 April, representatives from 20 allied governments and national agencies participated in a NATO-led exercise designed to strengthen mutual support in the cyber domain.
The activity aimed to improve coordination and collective response mechanisms for cyber incidents affecting critical national infrastructure. Through simulated threat scenarios, participants practised real-time information exchange, joint decision-making, and coordinated response planning.
According to NATO, cyber activities targeting critical infrastructure, industrial control systems, and public sector services have increased in frequency.
Such activities are considered to serve various objectives, including information gathering and operational disruption.
The role of cyber operations in modern conflict gained increased attention following Russia’s actions in Ukraine in 2022, where cyber activity was observed alongside traditional military operations.
Hosted by Czechia, the exercise served to test NATO’s Virtual Cyber Incident Support Capability (VCISC), a coordination platform introduced at the 2023 Vilnius Summit.
VCISC enables nations to request and receive cyber assistance from designated counterparts across the Alliance.
The support offered includes services such as malware analysis, cyber threat intelligence, and digital forensics. However, the initiative is voluntary, with allies contributing national resources and expertise to mitigate the impact of significant cyber incidents and support recovery.
Separately, in January 2025, US officials met with their Nordic-Baltic counterparts from Denmark, Estonia, Finland, Iceland, Latvia, Lithuania, Norway, and Sweden.
Discussions centred on enhancing regional cooperation to safeguard undersea cable infrastructure—critical to communications and energy systems. Participants noted the broadening spectrum of threats to these assets.
In parallel, NATO launched the Baltic Sentry to reinforce the protection of critical infrastructure in the Baltic Sea region. The initiative is intended to bolster NATO’s posture and improve its capacity to respond promptly to destabilising activities.
In July 2024, NATO also announced an expanded role for its Integrated Cyber Defence Centre (NICC).
The Centre is tasked with enhancing the protection of NATO and allied networks, as well as supporting the operational use of cyberspace. It provides commanders with insights into potential cyber threats and vulnerabilities, including those related to civilian infrastructure essential to military operations.
Hertz has disclosed a significant data breach involving sensitive customer information, including credit card and driver’s licence details, following a cyberattack on one of its service providers.
The breach stemmed from vulnerabilities in the Cleo Communications file transfer platform, exploited in October and December 2024.
Hertz confirmed the unauthorised access on 10 February, with further investigations revealing a range of exposed data, including names, birth dates, contact details, and in some cases, Social Security and passport numbers.
While the company has not confirmed how many individuals were affected, notifications have been issued in the US, UK, Canada, Australia, and across the EU.
Hertz stressed that no misuse of customer data has been identified so far, and that the breach has been reported to law enforcement and regulators. Cleo has since patched the exploited vulnerabilities.
The identity of the attackers remains unknown. However, Cleo was previously targeted in a broader cyber campaign last October, with the Clop ransomware group later claiming responsibility.
The gang published Cleo’s company data online and listed dozens of breached organisations, suggesting the incident was part of a wider, coordinated effort.
A hacker has exploited decentralised exchange KiloEX, draining approximately US$7.5 million by manipulating its price oracle mechanism. The breach led to an immediate suspension of the platform and sparked a cross-industry investigation involving cybersecurity firms and blockchain networks.
The vulnerability centred on KiloEX’s price feed system, which allowed the attacker to manipulate the ETH/USD feed by inputting an artificial entry price of 100 and closing it at 10,000.
According to cybersecurity firm PeckShield, this simple flaw enabled the attacker to steal millions across multiple chains, including $3.3 million from Base, $3.1 million from opBNB, and $1 million from BNB Smart Chain.
KiloEX is working with various security experts and blockchain networks such as BNB Chain and Manta Network to recover the stolen assets.
Funds are reportedly being routed through cross-chain protocols like zkBridge and Meson. Co-founder of Fuzzland, Chaofan Shou, described the breach as stemming from a ‘very simple vulnerability’ in oracle verification, where only intermediaries were validated rather than the original transaction sender.
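Shou's description points to a well-known class of bug in oracle and keeper designs: the contract checks that a price update arrives via a trusted relayer, but never checks who originated the request. The sketch below is a deliberately simplified, hypothetical Python model of that pattern (the addresses, function names, and figures are illustrative, not KiloEX's actual code); it shows how validating only the intermediary lets an attacker set an arbitrary entry and exit price.

```python
# Hypothetical, simplified model of the oracle-verification flaw described above.
# Names, addresses, and prices are illustrative only; this is not KiloEX's code.

TRUSTED_FORWARDERS = {"0xForwarderA"}   # intermediaries the contract trusts
TRUSTED_KEEPERS = {"0xKeeper1"}         # parties actually allowed to post prices

price_feed = {"ETH/USD": 1_800.0}

def update_price_vulnerable(pair, new_price, forwarder, original_sender):
    """BUG: only the intermediary (forwarder) is checked, so anyone relaying
    through a trusted forwarder can post an arbitrary price."""
    if forwarder not in TRUSTED_FORWARDERS:
        raise PermissionError("untrusted forwarder")
    price_feed[pair] = new_price

def update_price_fixed(pair, new_price, forwarder, original_sender):
    """FIX: validate the original transaction sender as well."""
    if forwarder not in TRUSTED_FORWARDERS or original_sender not in TRUSTED_KEEPERS:
        raise PermissionError("untrusted price update")
    price_feed[pair] = new_price

# The attack pattern reported above: open a position at an artificially low
# price, then close it at an artificially high one.
update_price_vulnerable("ETH/USD", 100.0, "0xForwarderA", "0xAttacker")
entry = price_feed["ETH/USD"]
update_price_vulnerable("ETH/USD", 10_000.0, "0xForwarderA", "0xAttacker")
close = price_feed["ETH/USD"]
print(f"price moved {close / entry:.0f}x between entry and close")  # 100x

# The patched check rejects the same call because the original sender is unknown.
try:
    update_price_fixed("ETH/USD", 100.0, "0xForwarderA", "0xAttacker")
except PermissionError as err:
    print("patched check rejects the attacker:", err)
```

The remedy implied by the quoted analysis is to authenticate the original sender (or sign the price payload end to end), not merely the hop it travelled through.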
The attack caused KiloEX’s token price to plummet by over 29% and came just one day after the platform announced a strategic partnership with DWF Labs, aimed at fuelling growth. KiloEX has promised a full incident report and a bounty programme to encourage asset recovery.
China has accused three alleged US intelligence operatives of carrying out cyberattacks on the Asian Winter Games held in Harbin in February 2025. The individuals, identified by Harbin police as Katheryn A. Wilson, Robert J. Snelling, and Stephen W. Johnson, are said to have worked for the US National Security Agency (NSA).
The attacks reportedly targeted systems critical to the Games’ operations, including athlete registration, travel, and competition management, which held sensitive personal data.
Chinese state media further claimed that the cyber intrusions extended beyond the sporting event, affecting key infrastructure in Heilongjiang province. Targets allegedly included energy, transport, water, telecoms, defence research institutions, and technology giant Huawei.
Authorities said the NSA used encrypted data to compromise Microsoft Windows systems in the region, with the aim of disrupting services and undermining national security.
The Foreign Ministry of China denounced the alleged cyberattacks as ‘extremely malicious,’ urging the United States to halt what it called repeated intrusions and misinformation.
The US Embassy in Beijing has yet to respond, and the allegations come amid ongoing tensions, with both nations frequently accusing each other of state-backed hacking.
Only last month, the US government named and charged 12 Chinese nationals in connection with cyberespionage efforts against American interests.
From GPT-4 to 4.5: What has changed and why it matters
In February 2025, OpenAI released GPT-4.5, the latest iteration in its series of large language models (LLMs), pushing the boundaries of what machines can do with language understanding and generation. Building on the strengths of GPT-4, the new model demonstrates improved reasoning capabilities, a more nuanced understanding of context, and smoother, more human-like interactions.
GPT-4.5 sets itself apart from its predecessors with refined alignment techniques, better memory over longer conversations, and increased control over tone, persona, and factual accuracy. Its ability to maintain coherent, emotionally resonant exchanges over extended dialogue marks a turning point in human-AI communication. These improvements are not just technical; they significantly affect the way we work, communicate, and relate to intelligent systems.
The increasing ability of GPT-4.5 to mimic human behaviour has raised a key question: Can it really fool us into thinking it is one of us? That question has recently been answered — and it has everything to do with the Turing Test.
The Turing Test: Origins, purpose, and modern relevance
In 1950, British mathematician and computer scientist Alan Turing posed a provocative question: ‘Can machines think?’ In his seminal paper ‘Computing Machinery and Intelligence,’ he proposed what would later become known as the Turing Test — a practical way of evaluating a machine’s ability to exhibit intelligent behaviour indistinguishable from that of a human.
In its simplest form, if a human evaluator cannot reliably distinguish between a human’s and a machine’s responses during a conversation, the machine is said to have passed the test. For decades, the Turing Test remained more of a philosophical benchmark than a practical one.
Early chatbots like ELIZA in the 1960s created the illusion of intelligence, but their scripted and shallow interactions fell far short of genuine human-like communication. Many researchers have questioned the test’s relevance as AI progressed, arguing that mimicking conversation is not the same as true understanding or consciousness.
Despite these criticisms, the Turing Test has endured — not as a definitive measure of machine intelligence, but rather as a cultural milestone and public barometer of AI progress. Today, the test has regained prominence with the emergence of models like GPT-4.5, which can hold complex, context-aware, emotionally intelligent conversations. What once seemed like a distant hypothetical is now an active, measurable challenge that GPT-4.5 has, by many accounts, overcome.
How GPT-4.5 fooled the judges: Inside the Turing Test study
In early 2025, a groundbreaking study conducted by researchers at the University of California, San Diego, provided the most substantial evidence yet that an AI could pass the Turing Test. In a controlled experiment involving over 500 participants, multiple conversational agents—including GPT-4.5, Meta’s LLaMa-3.1, and the classic chatbot ELIZA—were evaluated in blind text-based conversations. The participants were tasked with identifying whether they spoke to a human or a machine.
The results were astonishing: GPT-4.5 was judged to be human in 54% to 73% of interactions, depending on the scenario, surpassing the baseline for passing the Turing Test. In some cases, it outperformed actual human participants—who were correctly identified as human only 67% of the time.
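In numerical terms, the bar is modest but concrete: a judge guessing at random would label a witness human about half the time, and the real human participants were picked as human only 67% of the time. A quick back-of-the-envelope comparison of the figures reported above (a sketch, not the paper's own analysis) makes explicit why the result counts as a pass.

```python
# Back-of-the-envelope comparison of the rates reported above (illustrative only;
# the study itself uses proper statistical tests over individual judgements).
chance_level = 0.50    # expected if interrogators guessed at random
human_rate = 0.67      # how often real human participants were judged human
gpt45_low, gpt45_high = 0.54, 0.73   # GPT-4.5's reported range across scenarios

print("above chance even at its worst:", gpt45_low > chance_level)       # True
print("above the human baseline at its best:", gpt45_high > human_rate)  # True
```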
That experiment marked the first time a contemporary AI model convincingly passed the Turing Test under rigorous scientific conditions. The study not only demonstrated the model’s technical capabilities—it also raised philosophical and ethical questions.
What does it mean for a machine to be ‘indistinguishable’ from a human? And more importantly, how should society respond to a world where AI can convincingly impersonate us?
Measuring up: GPT-4.5 vs LLaMa-3.1 and ELIZA
While GPT-4.5’s performance in the Turing Test has garnered much attention, its comparison with other models puts things into a clearer perspective. Meta’s LLaMa-3.1, a powerful and widely respected open-source model, also participated in the study.
It was identified as human in approximately 56% of interactions: a strong showing, although it fell just short of the benchmark commonly used to define a Turing Test pass. The result highlights how subtle differences in conversational nuance and coherence can significantly influence perception.
The study also revisited ELIZA, the pioneering chatbot from the 1960s designed to mimic a psychotherapist. While historically significant, ELIZA’s simplistic, rule-based structure resulted in it being identified as non-human in most cases — around 77%. That stark contrast with modern models demonstrates how far natural language processing has progressed over the past six decades.
The comparative results underscore an important point: success in human-AI interaction today depends not only on fluent language generation but also on the ability to adapt tone, context, and emotional resonance. GPT-4.5's edge seems to come not from mere fluency but from its ability to emulate the subtle cues of human reasoning and expression, a quality that left many test participants second-guessing whether they were even talking to a machine.
The power of persona: How character shaped perception
One of the most intriguing aspects of the UC San Diego study was how assigning specific personas to AI models significantly influenced participants’ perceptions. When GPT-4.5 was framed as an introverted, geeky 19-year-old college student, it consistently scored higher in being perceived as human than when it had no defined personality.
The seemingly small narrative detail was a powerful psychological cue that shaped how people interpreted its responses. The use of persona added a layer of realism to the conversation.
Slight awkwardness, informal phrasing, or quirky responses were not seen as flaws — they were consistent with the character. Participants were more likely to forgive or overlook certain imperfections if those quirks aligned with the model’s ‘personality’.
That finding reveals how intertwined identity and believability are in human communication, even when the identity is entirely artificial. The strategy also echoes something long known in storytelling and branding: people respond to characters, not just content.
In the context of AI, persona functions as a kind of narrative camouflage — not necessarily to deceive, but to disarm. It helps bridge the uncanny valley by offering users a familiar social framework. And as AI continues to evolve, it is clear that shaping how a model is perceived may be just as important as what the model is actually saying.
Limitations of the Turing Test: Beyond the illusion of intelligence
While passing the Turing Test has long been viewed as a milestone in AI, many experts argue that it is not the definitive measure of machine intelligence. The test focuses on imitation — whether an AI can appear human in conversation — rather than on genuine understanding, reasoning, or consciousness. In that sense, it is more about performance than true cognitive capability.
Critics point out that large language models like GPT-4.5 do not ‘understand’ language in the human sense – they generate text by predicting the most statistically probable next word based on patterns in massive datasets. That allows them to generate impressively coherent responses, but it does not equate to comprehension, self-awareness, or independent thought.
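For readers unfamiliar with what next-word prediction actually involves, here is a deliberately tiny, self-contained Python sketch of the idea: candidate continuations receive scores, the scores are converted into probabilities with a softmax, and the most probable word is chosen. The context sentence, candidate words, and scores are all made up for illustration; a real model computes scores over tens of thousands of tokens with a learned neural network.

```python
import math

def softmax(scores):
    """Convert raw scores (logits) into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy example: predict the next word after a fixed context.
context = "The Turing Test asks whether a machine can imitate a"
candidates = ["human", "computer", "banana"]   # illustrative vocabulary
logits = [4.2, 1.3, -2.0]                      # made-up model scores

probs = softmax(logits)
for word, p in zip(candidates, probs):
    print(f"{word:>10}: {p:.3f}")

print("predicted next word:", candidates[probs.index(max(probs))])
```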
No matter how convincing, the illusion of intelligence is still an illusion — and mistaking it for something more can lead to misplaced trust or overreliance. Despite its symbolic power, the Turing Test was never meant to be the final word on AI.
As AI systems grow increasingly sophisticated, new benchmarks are needed: ones that assess not just linguistic mimicry but also reasoning, ethical decision-making, and robustness in real-world environments. Passing the Turing Test may grab headlines, but the real test of intelligence lies far beyond the ability to talk like us.
Wider implications: Rethinking the role of AI in society
GPT-4.5’s success in the Turing Test does not just mark a technical achievement — it forces us to confront deeper societal questions. If AI can convincingly pass as a human in open conversation, what does that mean for trust, communication, and authenticity in our digital lives?
From customer service bots to AI-generated news anchors, the line between human and machine is blurring — and the implications are far from purely academic. These developments are challenging existing norms in areas such as journalism, education, healthcare, and even online dating.
How do we ensure transparency when AI is involved? Should AI be required to disclose its identity in every interaction? And how do we guard against malicious uses — such as deepfake conversations or synthetic personas designed to manipulate, mislead, or exploit?
On a broader level, the emergence of human-sounding AI invites a rethinking of agency and responsibility. If a machine can persuade, sympathise, or influence like a person — who is accountable when things go wrong?
As AI becomes more integrated into the human experience, society must evolve its frameworks not only for regulation and ethics but also for cultural adaptation. GPT-4.5 may have passed the Turing Test, but the test for us, as a society, is just beginning.
What comes next: Human-machine dialogue in the post-Turing era
With GPT-4.5 crossing the Turing threshold, we are no longer asking whether machines can talk like us — we are now asking what that means for how we speak, think, and relate to machines. That moment represents a paradigm shift: from testing the machine’s ability to imitate humans to understanding how humans will adapt to coexist with machines that no longer feel entirely artificial.
Future AI models will likely push this boundary even further — engaging in conversations that are not only coherent but also deeply contextual, emotionally attuned, and morally responsive. The bar for what feels ‘human’ in digital interaction is rising rapidly, and with it comes the need for new social norms, protocols, and perhaps even new literacies.
We will need to learn not only how to talk to machines but how to live with them — as collaborators, counterparts, and, in some cases, as reflections of ourselves. In the post-Turing era, the test is no longer whether machines can fool us — it is whether we can maintain clarity, responsibility, and humanity in a world where the artificial feels increasingly real.
GPT-4.5 may have passed a historic milestone, but the real story is just beginning — not one of machines becoming human, but of humans redefining what it means to be ourselves in dialogue with them.
Nvidia is shifting its AI supercomputer manufacturing operations to the United States for the first time, instead of relying on a globally dispersed supply chain.
In partnership with industry giants such as TSMC, Foxconn, and Wistron, the company is establishing large-scale facilities to produce its advanced Blackwell chips in Arizona and complete supercomputers in Texas. Production is expected to reach full scale within 12 to 15 months.
Over a million square feet of manufacturing space has been commissioned, with key roles also played by packaging and testing firms Amkor and SPIL.
The move reflects Nvidia’s ambition to create up to half a trillion dollars in AI infrastructure within the next four years, while boosting supply chain resilience and growing its US-based operations instead of expanding solely abroad.
These AI supercomputers are designed to power new, highly specialised data centres known as ‘AI factories,’ capable of handling vast AI workloads.
Nvidia’s investment is expected to support the construction of dozens of such facilities, generating hundreds of thousands of jobs and securing long-term economic value.
To enhance efficiency, Nvidia will apply its own AI, robotics, and simulation tools across these projects, using Omniverse to model factory operations virtually and Isaac GR00T to develop robots that automate production.
According to CEO Jensen Huang, bringing manufacturing home strengthens supply chains and better positions the company to meet the surging global demand for AI computing power.
US President Donald Trump is preparing to introduce new tariffs on semiconductor imports, aiming to shift more chip production back to the United States.
Semiconductors, or microchips, are essential components in everything from smartphones and laptops to medical devices and renewable energy systems.
Speaking aboard Air Force One, Trump said new tariff rates would be announced soon as part of a broader effort to end American reliance on foreign-made chips and strengthen national security.
The global semiconductor supply chain is heavily concentrated in Asia, with Taiwan’s TSMC producing over half of the world’s chips and supplying major companies like Apple, Microsoft, and Nvidia.
Trump’s move signals a more aggressive stance in the ongoing ‘chip wars’ with China, as his administration warns of the dangers of the US being dependent on overseas production for such a critical technology.
Although the US has already taken steps to boost domestic chip production—like the $6.6 billion awarded to TSMC to build a factory in Arizona—progress has been slow due to a shortage of skilled workers.
The plant faced delays, and TSMC ultimately flew in thousands of workers from Taiwan to meet demands, underscoring the challenge of building a self-reliant semiconductor industry on American soil.
Why does it matter?
Trump’s proposed tariffs are expected to form part of a wider investigation into the electronics supply chain, aimed at shielding the US from foreign control and ensuring long-term technological independence. As markets await the announcement, the global tech industry is bracing for potential disruptions and new tensions in the international trade landscape.