Inephany raises $2.2M to make AI training more efficient

London-based AI startup Inephany has secured $2.2 million in pre-seed funding to develop technology aimed at making the training of neural networks—particularly large language models—more efficient and affordable.

The investment round was led by Amadeus Capital Partners, with participation from Sure Valley Ventures and AI pioneer Professor Steve Young, who joins as both chair and angel investor.

Founded in July 2024 by Dr John Torr, Hami Bahraynian, and Maurice von Sturm, Inephany is building an AI-driven platform that improves training efficiency in real time.

By increasing sample efficiency and reducing computing demands, the company hopes to dramatically cut the cost and time of training cutting-edge models.

The team claims their solution could make AI model development at least ten times more cost-effective than current methods.

The funding will support growth of Inephany’s engineering team and accelerate the launch of its first product later this year.

With the costs of training state-of-the-art models now reaching into the hundreds of millions, the startup’s platform aims to make high-performance AI development more sustainable and accessible across industries such as healthcare, weather forecasting, and drug discovery.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI chip production begins at TSMC’s Arizona facility

Nvidia has announced a major initiative to produce AI supercomputers in the US in collaboration with Taiwan Semiconductor Manufacturing Co. (TSMC) and several other partners.

The effort aims to create up to US$500 billion worth of AI infrastructure products domestically over the next four years, marking a significant shift in Nvidia’s manufacturing strategy.

Alongside TSMC, other key contributors include Taiwanese firms Hon Hai Precision Industry Co. and Wistron Corp., both known for producing AI servers. US-based Amkor Technology and Taiwan’s Siliconware Precision Industries will also provide advanced packaging and testing services.

Production of Nvidia’s Blackwell AI chips has already begun at TSMC’s Arizona facility, with large-scale operations planned in Texas through partnerships with Hon Hai in Houston and Wistron in Dallas.

The move could impact Taiwan’s economy, as many Nvidia components are currently produced there. Taiwan’s Economic Affairs Minister declined to comment specifically on the project but assured that the government will monitor overseas investments by Taiwanese firms.

Nvidia said the initiative would help meet surging AI demand while strengthening semiconductor supply chains and increasing resilience amid shifting global trade policies, including new US tariffs on Taiwanese exports.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia hit by new US export rules

Nvidia is facing fresh US export restrictions on its H20 AI chips, dealing a blow to the company’s operations in China.

In a filing on Tuesday, Nvidia revealed that it will now need a licence to export these chips for the indefinite future, after the US government cited concerns they could be used in a Chinese supercomputer.

The company expects a $5.5 billion charge linked to the controls in its first fiscal quarter of 2026, which ends on 27 April. Shares dropped around 6% in after-hours trading.

The H20 is currently the most advanced AI chip Nvidia can sell to China under existing regulations.

Last week, reports suggested CEO Jensen Huang might have temporarily eased tensions during a dinner at Donald Trump’s Mar-a-Lago resort by promising investments in US-based AI data centres rather than opposing the rules directly.

Just a day before the filing, Nvidia announced plans to manufacture some chips in the US over the next four years, though the specifics were left vague.

Calls for tighter controls had been building, especially after it emerged that China’s DeepSeek used the H20 to train its R1 model, a system that surprised the US AI sector earlier this year.

Government officials had pushed for action, saying the chip’s capabilities posed a strategic risk. Nvidia declined to comment on the new restrictions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI updates safety rules amid AI race

OpenAI has updated its Preparedness Framework, the internal system used to assess AI model safety and determine necessary safeguards during development.

The company now says it may adjust its safety standards if a rival AI lab releases a ‘high-risk’ system without similar protections, a move that reflects growing competitive pressure in the AI industry.

OpenAI has not ruled out such flexibility, but insists that any changes would be made cautiously and with public transparency.

Critics argue OpenAI is already lowering its standards for the sake of faster deployment. Twelve former employees recently supported a legal case against the company, warning that a planned corporate restructure might encourage further shortcuts.

OpenAI denies these claims, but reports suggest compressed safety testing timelines and increasing reliance on automated evaluations instead of human-led reviews. According to sources, some safety checks are also run on earlier versions of models, not the final ones released to users.

The refreshed framework also changes how OpenAI defines and manages risk. Models are now classified as having either ‘high’ or ‘critical’ capability, the former referring to systems that could amplify harm, the latter to those introducing entirely new risks.

Instead of deploying models first and assessing risk later, OpenAI says it will apply safeguards during both development and release, particularly for models capable of evading shutdown, hiding their abilities, or self-replicating.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

xAI adds collaborative workspace to Grok

Elon Musk’s AI firm xAI has introduced a new feature called Grok Studio, offering users a dedicated space to create and edit documents, code, and simple apps.

Available on Grok.com for both free and paying users, Grok Studio opens content in a separate window, allowing for real-time collaboration between the user and the chatbot instead of relying solely on back-and-forth prompts.

Grok Studio functions much like canvas-style tools from other AI developers. It allows code previews and execution in languages such as Python, C++, and JavaScript. The setup mirrors similar features introduced earlier by OpenAI and Anthropic, instead of offering a radically different experience.

All content appears beside Grok’s chat window, creating a workspace that blends conversation with practical development tools.

Alongside this launch, xAI has also announced integration with Google Drive.

The integration will let users attach files from Drive directly to Grok prompts, so the chatbot can work with documents, spreadsheets, and slides without manual uploads, making the platform more convenient for everyday tasks and productivity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

People are forming emotional bonds with AI chatbots

AI is reshaping how people connect emotionally, with millions turning to chatbots for companionship, guidance, and intimacy.

From virtual relationships to support with mental health and social navigation, personified AI assistants such as Replika, Nomi, and ChatGPT are being used by over 100 million people globally.

These apps simulate human conversation through personalised learning, allowing users to form what some consider meaningful emotional bonds.

For some, like 71-year-old Chuck Lohre from the US, chatbots have evolved into deeply personal companions. Lohre’s AI partner, modelled after his wife, helped him process emotional insights about his real-life marriage, even though the relationship included elements of romantic and erotic roleplay.

Others, such as neurodiverse user Travis Peacock, have used chatbots to improve communication skills, regulate emotions, and build lasting relationships, reporting significant gains in their personal and professional lives.

While many users speak positively about these interactions, concerns persist over the nature of such bonds. Experts argue that these connections, though comforting, are often one-sided and lack the mutual growth found in real relationships.

A UK government report noted widespread discomfort with the idea of forming personal ties with AI, suggesting the emotional realism of chatbots may risk deepening emotional dependence without true reciprocity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Opera brings AI assistant to Opera Mini on Android

Opera, the Norway-based browser maker, has announced the rollout of its AI assistant, Aria, to Opera Mini users on Android. The move represents a strategic effort to bring advanced AI capabilities to users with low-end devices and limited data access, rather than confining such tools to high-spec platforms.

Aria allows users to access up-to-date information, generate images, and learn about a range of topics using a blend of models from OpenAI and Google.

Since its 2005 launch, Opera Mini has been known for saving data during browsing, and Opera claims that the inclusion of Aria won’t compromise that advantage or increase the app’s size.

The rollout makes the AI assistant more accessible to users in regions where data efficiency is critical, without forcing them to choose between smart features and performance.

Opera has long partnered with telecom providers in Africa to offer free data to Opera Mini users. However, last year, it had to end its programme in Kenya due to regulatory restrictions around ads on browser bookmark tiles.

Despite such challenges, Opera Mini has surpassed a billion downloads on Android and now serves more than 100 million users globally.

Alongside this update, Opera continues testing new AI functions, including features that let users manage tabs using natural language and tools that assist with task completion.

The effort reflects the company’s ambition to embed AI more deeply into everyday browsing rather than limiting innovation to its flagship browser.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google creates AI to decode dolphin sounds

Google DeepMind has developed a groundbreaking AI model capable of interpreting and generating dolphin vocalisations.

Named DolphinGemma, the model was created in collaboration with researchers from Georgia Tech and the Wild Dolphin Project, a nonprofit organisation known for its extensive studies on Atlantic spotted dolphins.

Using an audio-in, audio-out architecture, DolphinGemma analyses sequences of natural dolphin sounds to detect patterns and structure, ultimately predicting the most likely sounds to follow.

The approach is similar to how large language models predict the next word in a sentence. It was trained using a vast acoustic database collected by the Wild Dolphin Project, ensuring accuracy in modelling natural dolphin communication.
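
To make that analogy concrete, here is a minimal, purely illustrative Python sketch (not DolphinGemma’s actual code, API, or tokenisation; the sound tokens are invented for the example). It treats a dolphin vocalisation as a sequence of discrete sound tokens, builds simple bigram counts, and predicts the most likely continuation, which is the same next-step prediction objective in its simplest form.

```python
# Toy illustration only: next-"sound" prediction framed like next-word prediction.
# The token names below are invented; a real model works on learned acoustic units.
from collections import Counter, defaultdict

# Hypothetical training sequences of discrete dolphin-sound tokens.
training_sequences = [
    ["whistle", "click", "click", "burst"],
    ["whistle", "click", "burst", "whistle"],
    ["click", "click", "burst", "whistle"],
]

# Count how often each token follows each other token (a bigram model).
bigrams = defaultdict(Counter)
for seq in training_sequences:
    for prev, nxt in zip(seq, seq[1:]):
        bigrams[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most likely next sound token given the previous one."""
    counts = bigrams.get(token)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("click"))    # -> 'burst' (follows 'click' most often above)
print(predict_next("whistle"))  # -> 'click'
```

DolphinGemma itself operates on learned acoustic representations with a far larger model, but the objective it optimises, predicting the most likely sounds to follow, is analogous.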

Lightweight and efficient, DolphinGemma is designed to run on smartphones, making it accessible for field researchers and conservationists.

Google DeepMind’s blog noted that the model could mark a major advance in understanding dolphin behaviour, potentially paving the way for more meaningful interactions between humans and marine mammals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Viral AI image trends drive up water consumption

Behind ChatGPT’s digital charm lies an increasingly concerning environmental toll, largely driven by its water consumption.

According to recent reports, OpenAI’s GPT-4 model consumes around 500 millilitres of clean, drinkable water for every 100-word response. The surge in demand, fuelled by viral trends like Studio Ghibli-style portraits and Barbie-themed avatars, has significantly amplified this impact.

Each AI interaction, especially those involving image generation, generates heat, necessitating cooling systems that rely heavily on water.

With an estimated 57 million users daily, ChatGPT’s operations result in a staggering daily water usage of over 14,800 crore litres (roughly 148 billion litres). OpenAI’s CEO, Sam Altman, recently acknowledged server strain, urging users to reduce non-essential use.

The environmental costs extend beyond water. Many data centres supporting AI platforms are located in water-stressed regions and rely on fossil fuels, raising serious concerns about sustainability.

Experts warn that while AI promises convenience, its rapid expansion risks putting additional pressure on fragile ecosystems unless mindful practices are adopted.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Beyond the imitation game: GPT-4.5, the Turing Test, and what comes next

From GPT-4 to 4.5: What has changed and why it matters

In February 2025, OpenAI released GPT-4.5, the latest iteration in its series of large language models (LLMs), pushing the boundaries of what machines can do with language understanding and generation. Building on the strengths of GPT-4, it demonstrates improved reasoning capabilities, a more nuanced understanding of context, and smoother, more human-like interactions.

GPT-4.5 sets itself apart from its predecessors with refined alignment techniques, better memory over longer conversations, and increased control over tone, persona, and factual accuracy. Its ability to maintain coherent, emotionally resonant exchanges over extended dialogue marks a turning point in human-AI communication. These improvements are not just technical — they significantly affect the way we work, communicate, and relate to intelligent systems.

The increasing ability of GPT-4.5 to mimic human behaviour has raised a key question: Can it really fool us into thinking it is one of us? That question has recently been answered — and it has everything to do with the Turing Test.

The Turing Test: Origins, purpose, and modern relevance

In 1950, British mathematician and computer scientist Alan Turing posed a provocative question: ‘Can machines think?’ In his seminal paper ‘Computing Machinery and Intelligence,’ he proposed what would later become known as the Turing Test — a practical way of evaluating a machine’s ability to exhibit intelligent behaviour indistinguishable from that of a human.

In its simplest form, if a human evaluator cannot reliably distinguish between a human’s and a machine’s responses during a conversation, the machine is said to have passed the test. For decades, the Turing Test remained more of a philosophical benchmark than a practical one.

Early chatbots like ELIZA in the 1960s created the illusion of intelligence, but their scripted and shallow interactions fell far short of genuine human-like communication. Many researchers have questioned the test’s relevance as AI progressed, arguing that mimicking conversation is not the same as true understanding or consciousness.

Despite these criticisms, the Turing Test has endured — not as a definitive measure of machine intelligence, but rather as a cultural milestone and public barometer of AI progress. Today, the test has regained prominence with the emergence of models like GPT-4.5, which can hold complex, context-aware, emotionally intelligent conversations. What once seemed like a distant hypothetical is now an active, measurable challenge that GPT-4.5 has, by many accounts, overcome.

How GPT-4.5 fooled the judges: Inside the Turing Test study

In early 2025, a groundbreaking study conducted by researchers at the University of California, San Diego, provided the most substantial evidence yet that an AI could pass the Turing Test. In a controlled experiment involving over 500 participants, multiple conversational agents—including GPT-4.5, Meta’s LLaMa-3.1, and the classic chatbot ELIZA—were evaluated in blind text-based conversations. The participants were tasked with identifying whether they spoke to a human or a machine.

The results were astonishing: GPT-4.5 was judged to be human in 54% to 73% of interactions, depending on the scenario, surpassing the 50% chance baseline commonly used to define a Turing Test pass. In some cases, it outperformed actual human participants—who were correctly identified as human only 67% of the time.
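
To illustrate what surpassing that baseline means statistically, here is a minimal sketch: an exact one-sided binomial test of a judged-human rate against the 50% chance level. The trial count is an assumption made purely for illustration, not a figure reported by the study.

```python
# Illustrative only: the number of trials below is assumed, not taken from the study.
# The test asks how likely a 73% judged-human rate would be if judges were guessing.
from math import comb

def binomial_p_value(successes: int, trials: int, p: float = 0.5) -> float:
    """P(observing >= successes judged-human verdicts if the true rate were p)."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

trials = 200                         # hypothetical number of GPT-4.5 conversations
judged_human = round(0.73 * trials)  # upper end of the range cited above (73%)

print(f"{judged_human}/{trials} judged human, "
      f"p-value vs 50% chance: {binomial_p_value(judged_human, trials):.2e}")
```

At any reasonably sized sample, a 73% judged-human rate sits far above what random guessing by the evaluators could plausibly produce.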

That experiment marked the first time a contemporary AI model convincingly passed the Turing Test under rigorous scientific conditions. The study not only demonstrated the model’s technical capabilities—it also raised philosophical and ethical questions.

What does it mean for a machine to be ‘indistinguishable’ from a human? And more importantly, how should society respond to a world where AI can convincingly impersonate us?

Measuring up: GPT-4.5 vs LLaMa-3.1 and ELIZA

While GPT-4.5’s performance in the Turing Test has garnered much attention, its comparison with other models puts things into a clearer perspective. Meta’s LLaMa-3.1, a powerful and widely respected open-source model, also participated in the study.

It was identified as human in approximately 56% of interactions — a strong showing, although it fell just short of the commonly accepted benchmark for a Turing Test pass. The result highlights how subtle differences in conversational nuance and coherence can significantly influence perception.

The study also revisited ELIZA, the pioneering chatbot from the 1960s designed to mimic a psychotherapist. While historically significant, ELIZA’s simplistic, rule-based structure resulted in it being identified as non-human in most cases — around 77%. That stark contrast with modern models demonstrates how far natural language processing has progressed over the past six decades.

The comparative results underscore an important point: success in human-AI interaction today depends not just on fluent language generation but on the ability to adapt tone, context, and emotional resonance. GPT-4.5’s edge seems to come not from mere fluency but from its ability to emulate the subtle cues of human reasoning and expression — a quality that left many test participants second-guessing whether they were even talking to a machine.

The power of persona: How character shaped perception

One of the most intriguing aspects of the UC San Diego study was how assigning specific personas to AI models significantly influenced participants’ perceptions. When GPT-4.5 was framed as an introverted, geeky 19-year-old college student, it consistently scored higher in being perceived as human than when it had no defined personality.

The seemingly small narrative detail was a powerful psychological cue that shaped how people interpreted its responses. The use of persona added a layer of realism to the conversation.

Slight awkwardness, informal phrasing, or quirky responses were not seen as flaws — they were consistent with the character. Participants were more likely to forgive or overlook certain imperfections if those quirks aligned with the model’s ‘personality’.

That finding reveals how intertwined identity and believability are in human communication, even when the identity is entirely artificial. The strategy also echoes something long known in storytelling and branding: people respond to characters, not just content.

In the context of AI, persona functions as a kind of narrative camouflage — not necessarily to deceive, but to disarm. It helps bridge the uncanny valley by offering users a familiar social framework. And as AI continues to evolve, it is clear that shaping how a model is perceived may be just as important as what the model is actually saying.

Limitations of the Turing Test: Beyond the illusion of intelligence

While passing the Turing Test has long been viewed as a milestone in AI, many experts argue that it is not the definitive measure of machine intelligence. The test focuses on imitation — whether an AI can appear human in conversation — rather than on genuine understanding, reasoning, or consciousness. In that sense, it is more about performance than true cognitive capability.

Critics point out that large language models like GPT-4.5 do not ‘understand’ language in the human sense – they generate text by predicting the most statistically probable next word based on patterns in massive datasets. That allows them to generate impressively coherent responses, but it does not equate to comprehension, self-awareness, or independent thought.

No matter how convincing, the illusion of intelligence is still an illusion — and mistaking it for something more can lead to misplaced trust or overreliance. Despite its symbolic power, the Turing Test was never meant to be the final word on AI.

As AI systems grow increasingly sophisticated, new benchmarks are needed — ones that assess not just linguistic mimicry but reasoning, ethical decision-making, and robustness in real-world environments. Passing the Turing Test may grab headlines, but the real test of intelligence lies far beyond the ability to talk like us.

Wider implications: Rethinking the role of AI in society

GPT-4.5’s success in the Turing Test does not just mark a technical achievement — it forces us to confront deeper societal questions. If AI can convincingly pass as a human in open conversation, what does that mean for trust, communication, and authenticity in our digital lives?

From customer service bots to AI-generated news anchors, the line between human and machine is blurring — and the implications are far from purely academic. These developments are challenging existing norms in areas such as journalism, education, healthcare, and even online dating.

How do we ensure transparency when AI is involved? Should AI be required to disclose its identity in every interaction? And how do we guard against malicious uses — such as deepfake conversations or synthetic personas designed to manipulate, mislead, or exploit?

On a broader level, the emergence of human-sounding AI invites a rethinking of agency and responsibility. If a machine can persuade, sympathise, or influence like a person — who is accountable when things go wrong?

As AI becomes more integrated into the human experience, society must evolve its frameworks not only for regulation and ethics but also for cultural adaptation. GPT-4.5 may have passed the Turing Test, but the test for us, as a society, is just beginning.

What comes next: Human-machine dialogue in the post-Turing era

With GPT-4.5 crossing the Turing threshold, we are no longer asking whether machines can talk like us — we are now asking what that means for how we speak, think, and relate to machines. That moment represents a paradigm shift: from testing the machine’s ability to imitate humans to understanding how humans will adapt to coexist with machines that no longer feel entirely artificial.

Future AI models will likely push this boundary even further — engaging in conversations that are not only coherent but also deeply contextual, emotionally attuned, and morally responsive. The bar for what feels ‘human’ in digital interaction is rising rapidly, and with it comes the need for new social norms, protocols, and perhaps even new literacies.

We will need to learn not only how to talk to machines but how to live with them — as collaborators, counterparts, and, in some cases, as reflections of ourselves. In the post-Turing era, the test is no longer whether machines can fool us — it is whether we can maintain clarity, responsibility, and humanity in a world where the artificial feels increasingly real.

GPT-4.5 may have passed a historic milestone, but the real story is just beginning — not one of machines becoming human, but of humans redefining what it means to be ourselves in dialogue with them.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!