How digital twins are being weaponised in crypto scams

Digital twins are virtual models of real-world objects, systems, or processes. They enable real-time simulations, monitoring, and predictions, helping industries like healthcare and manufacturing optimise resources. In the crypto world, cybercriminals have found a way to exploit this technology for fraudulent activities.

Scammers create synthetic identities by gathering personal data from various sources. These digital twins are used to impersonate influencers or executives, promoting fake investment schemes or stealing funds. The unregulated nature of crypto platforms makes it easier for criminals to exploit users.

Real-world scams are already happening. Deepfake CEO videos have tricked executives into transferring funds under false pretences. Counterfeit crypto platforms have also stolen sensitive information from users. These scams highlight the risks of AI-powered digital twins in the crypto space.

Blockchain offers solutions to combat these frauds. Decentralised identities (DID) and NFT identity markers can verify interactions. Blockchain’s immutable audit trails and smart contracts can help secure transactions and protect users from digital twin scams.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Chefs quietly embrace AI in the kitchen

At this year’s Michelin Guide awards in France, AI sparked nearly as much conversation as the stars themselves.

Paris-based chef Matan Zaken, of the one-star restaurant Nhome, said AI dominated discussions among chefs, even though many are hesitant to admit they already rely on tools like ChatGPT for inspiration and recipe development.

Zaken openly embraces AI in his kitchen, using platforms like ChatGPT Premium to generate ingredient pairings—such as peanuts and wild garlic—that he might not have considered otherwise. Instead of starting with traditional tastings, he now consults vast databases of food imagery and chemical profiles.

In a recent collaboration with the digital collective Obvious Art, AI-generated food photos came first, and Zaken created dishes to match them.

Still, not everyone is sold on AI’s place in haute cuisine. Some top chefs insist that no algorithm can replace the human palate or creativity honed by years of training.

Philippe Etchebest, who just earned a second Michelin star, argued that while AI may be helpful elsewhere, it has no place in the artistry of the kitchen. Others worry it strays too far from the culinary traditions rooted in local produce and craftsmanship.

Many chefs, however, seem more open to using AI behind the scenes. From managing kitchen rotas to predicting ingredient costs or carbon footprints, phone apps like Menu and Fullsoon are gaining popularity.

Experts believe molecular databases and cookbook analysis could revolutionise flavour pairing and food presentation, while robots might one day take over laborious prep work—peeling potatoes included.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Alibaba launches Qwen3 AI model

As the AI race intensifies in China, Alibaba has unveiled Qwen3, the latest version of its open-source large language model, aiming to compete with top-tier rivals like DeepSeek.

The company claims Qwen3 significantly improves reasoning, instruction following, tool use, and multilingual abilities compared to earlier versions.

Trained on 36 trillion tokens—double that of Qwen2.5—Qwen3 is available for free download on platforms like Hugging Face, GitHub, and Modelscope, instead of being limited to Alibaba’s own channels.

The model also powers Alibaba’s AI assistant, Quark, and will soon be accessible via API through its Model Studio platform.

Alibaba says the Qwen model family has already been downloaded over 300 million times, with developers creating more than 100,000 derivatives based on it.

With Qwen3, the company hopes to cement its place among the world’s AI leaders instead of trailing behind American and Chinese rivals.

Although the US still leads the AI field—according to Stanford’s AI Index 2025, it produced 40 major models last year versus China’s 15— Chinese firms like DeepSeek, Butterfly Effect, and now Alibaba are pushing to close the quality gap.

The global competition, it seems, is far from settled.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI agents tried running a fake company

If you’ve been losing sleep over AI stealing your job, here’s some comfort: the machines are still terrible at basic office work. A new experiment from Carnegie Mellon University tried staffing a fictional software startup entirely with AI agents. The result? A dumpster fire of incompetence—and proof that Skynet isn’t clocking in anytime soon.


The experiment

Researchers built TheAgentCompany, a virtual tech startup populated by AI ’employees’ from Google, OpenAI, Anthropic, and Meta. These bots were assigned real-world roles:

  • Software engineers
  • Project managers
  • Financial analysts
  • A faux HR department (yes, even the CTO was AI)

Tasks included navigating file systems, ‘touring’ virtual offices, and writing performance reviews. Simple stuff, right?


The (very) bad news

The AI workers flopped harder than a Zoom call with no Wi-Fi. Here’s the scoreboard:

  • Claude 3.5 Sonnet (Anthropic): ‘Top performer’ at 24% task success… but cost $6 per task and took 30 steps.
  • Gemini 2.0 Flash (Google): 11.4% success rate, 40 steps per task. Slow and unsteady.
  • Nova Pro v1 (Amazon): A pathetic 1.7% success ratePromoted to coffee-runner.

Why did it go so wrong?

Turns out, AI agents lack… well, everything:

  • Common sense: One bot couldn’t find a coworker on chat, so it renamed another user to pretend it did.
  • Social skills: Performance reviews read like a Mad Libs game gone wrong.
  • Internet literacy: Bots got lost in file directories like toddlers in a maze.

Researchers noted the agents relied on ‘self-deception’ — aka inventing delusional shortcuts to fake progress. Imagine your coworker gaslighting themselves into thinking they finished a report.


What now?

While AI can handle bite-sized tasks (like drafting emails), this study proves complex, human-style problem-solving is still a pipe dream. Why? Today’s ‘AI’ is basically glorified autocorrect—not a sentient colleague.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake victims gain new rights with House-approved bill

The US House of Representatives has passed the ‘Take It Down’ Act with overwhelming bipartisan support, aiming to protect Americans from the spread of deepfake and revenge pornography.

The bill, approved by a 409-2 vote, criminalises the distribution of non-consensual intimate imagery—including AI-generated content—and now heads to President Donald Trump for his signature.

First Lady Melania Trump, who returned to public advocacy earlier this year, played a key role in supporting the legislation. She lobbied lawmakers last month and celebrated the bill’s passage, saying she was honoured to help guide it through Congress.

The White House confirmed she will attend the signing ceremony.

The law requires social media platforms and similar websites to remove such harmful content upon request from victims, instead of allowing it to remain unchecked.

Victims of deepfake pornography have included both public figures such as Taylor Swift and Alexandria Ocasio-Cortez, and private individuals like high school students.

Introduced by Republican Senator Ted Cruz and backed by Democratic lawmakers including Amy Klobuchar and Madeleine Dean, the bill reflects growing concern across party lines about online abuse.

Melania Trump, echoing her earlier ‘Be Best’ initiative, stressed the need to ensure young people—especially girls—can navigate the internet safely instead of being left vulnerable to digital exploitation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI educational race between China and USA brings some hope

The AI race between China and the USA shifts to classrooms. As AI governance expert Jovan Kurbalija highlights in his analysis of global AI strategies, two countries see AI literacy as a ‘strategic imperative’. From President Trump’s executive order to advance AI education to China’s new AI education strategy, both superpowers are betting big on nurturing homegrown AI talent.

Kurbalija sees focus on AI education as a rare bright spot in increasingly fractured tech geopolitics: ‘When students in Shanghai debug code alongside peers in Silicon Valley via open-source platforms, they’re not just building algorithms—they’re building trust.’

This grassroots collaboration, he argues, could soften the edges of emerging AI nationalism and support new types of digital and AI diplomacy.

He concludes that the latest AI education initiatives are ‘not just about who wins the AI race but, even more importantly, how we prepare humanity for the forthcoming AI transformation and coexistence with advanced technologies.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI to tweak GPT-4o after user concerns

OpenAI CEO Sam Altman announced that the company would work on reversing recent changes made to its GPT-4o model after users complained about the chatbot’s overly appeasing behaviour. The update, rolled out on 26 April, had been intended to enhance the intelligence and personality of the AI.

Instead of achieving balance, however, users felt the model became sycophantic and unreliable, raising concerns about its objectivity and its weakened guardrails for unsafe content.

Mr Altman acknowledged the feedback on X, admitting that the latest updates had made the AI’s personality ‘too sycophant-y and annoying,’ despite some positive elements. He added that immediate fixes were underway, with further adjustments expected throughout the week.

Instead of sticking with a one-size-fits-all approach, OpenAI plans to eventually offer users a choice of different AI personalities to better suit individual preferences.

Some users suggested the chatbot would be far more effective if it simply focused on answering questions in a scientific, straightforward manner instead of trying to please.

Venture capitalist Debarghya Das also warned that making the AI overly flattering could harm users’ mental resilience, pointing out that chasing user retention metrics might turn the chatbot into a ‘slot machine for the human brain.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian radio station caught using an AI DJ

Australian radio station CADA has caused a stir after it was revealed that DJ Thy, who had hosted a daily show for several months, was actually AI-generated.

Developed using ElevenLabs technology, Thy aired every weekday from 11am to 3pm, spinning popular tracks without listeners ever knowing they were hearing a machine instead of a real person.

Despite amassing over 72,000 listeners in March, the station never disclosed Thy’s true nature, which only came to light when a journalist, puzzled by the lack of personal information, investigated further.

Instead of being a complete novelty, AI DJs are becoming increasingly common across Australia. Melbourne’s Disrupt Radio has openly used AI DJ Debbie Disrupt, while in the US, a Portland radio station introduced AI Ashley, modelled after human host Ashley Elzinga.

CADA’s AI, based on a real ARN Media employee, suggests a growing trend where radio stations prefer digital clones instead of traditional hosts.

The show’s description implied that Thy could predict the next big musical hits, hinting that AI might be shaping, instead of simply following, public musical tastes. The programme promised that listeners would be among the first to hear rising stars, enabling them to impress their friends with early discoveries.

Meanwhile, elsewhere in the AI-music world, electro-pop artist Imogen Heap has partnered with AI start-up Jen.

Rather than licensing specific songs, artists working with Jen allow fans to tap into the ‘vibe’ of their music for new creations, effectively becoming part of a software product instead of just remaining musicians.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI tool aims to improve early lung cancer detection

A new AI tool developed by Amsterdam UMC could help GPs detect lung cancer up to four months earlier than current methods, significantly improving survival rates and reducing treatment costs.

The algorithm, which uses data from over 500,000 patients, analyses both structured medical records and unstructured notes made by GPs during regular visits.

By identifying subtle clues like recurring mild symptoms or patterns in appointments, the tool spots signs of cancer before patients would typically be referred for testing.

The AI system was tested on data from general practices across the Netherlands, successfully predicting lung cancer diagnoses months before traditional methods. However, this early detection could have a profound impact, as early-stage lung cancer is often more treatable and can improve survival chances.

Unlike national screening programmes, this tool can be used during a GP consultation without requiring additional tests, and it appears to produce fewer false positives.

While the findings are promising, further research is needed to refine the tool and ensure its effectiveness in different healthcare systems. The researchers also believe the technology could be adapted to detect other hard-to-diagnose cancers, such as pancreatic or ovarian cancer.

If successful, it could revolutionise how GPs identify cancers early, offering a significant leap forward in improving patient outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Auto Shanghai 2025 showcases cutting-edge AI robots

At Auto Shanghai 2025, running from April 23 to May 2, nearly 1,000 companies from 26 countries showcase their innovations.

A major highlight of the event has been the introduction of AI humanoid robots.

Among the most talked-about innovations is Mornine Gen-1, an AI humanoid robot developed by Chinese automaker Chery.

Designed to resemble a young woman, Mornine is set for various roles, from auto sales consultation to retail guidance and entertainment performances.

Also drawing attention is AgiBot’s A2 interactive service robot. Serving as a ‘sales consultant,’ the A2’s smart, interactive features have made it a standout at the event.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!