Deepfake victims gain new rights with House-approved bill

The US House of Representatives has passed the ‘Take It Down’ Act with overwhelming bipartisan support, aiming to protect Americans from the spread of deepfake and revenge pornography.

The bill, approved by a 409-2 vote, criminalises the distribution of non-consensual intimate imagery—including AI-generated content—and now heads to President Donald Trump for his signature.

First Lady Melania Trump, who returned to public advocacy earlier this year, played a key role in supporting the legislation. She lobbied lawmakers last month and celebrated the bill’s passage, saying she was honoured to help guide it through Congress.

The White House confirmed she will attend the signing ceremony.

The law requires social media platforms and similar websites to remove such harmful content upon request from victims, instead of allowing it to remain unchecked.

Victims of deepfake pornography have included both public figures such as Taylor Swift and Alexandria Ocasio-Cortez, and private individuals like high school students.

Introduced by Republican Senator Ted Cruz and backed by Democratic lawmakers including Amy Klobuchar and Madeleine Dean, the bill reflects growing concern across party lines about online abuse.

Melania Trump, echoing her earlier ‘Be Best’ initiative, stressed the need to ensure young people—especially girls—can navigate the internet safely instead of being left vulnerable to digital exploitation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI educational race between China and USA brings some hope

The AI race between China and the USA is shifting to classrooms. As AI governance expert Jovan Kurbalija highlights in his analysis of global AI strategies, both countries now treat AI literacy as a ‘strategic imperative’. From President Trump’s executive order on advancing AI education to China’s new AI education strategy, both superpowers are betting big on nurturing homegrown AI talent.

Kurbalija sees the focus on AI education as a rare bright spot in increasingly fractured tech geopolitics: ‘When students in Shanghai debug code alongside peers in Silicon Valley via open-source platforms, they’re not just building algorithms—they’re building trust.’

This grassroots collaboration, he argues, could soften the edges of emerging AI nationalism and support new types of digital and AI diplomacy.

He concludes that the latest AI education initiatives are ‘not just about who wins the AI race but, even more importantly, how we prepare humanity for the forthcoming AI transformation and coexistence with advanced technologies.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI to tweak GPT-4o after user concerns

OpenAI CEO Sam Altman announced that the company would work on reversing recent changes made to its GPT-4o model after users complained about the chatbot’s overly appeasing behaviour. The update, rolled out on 26 April, had been intended to enhance the intelligence and personality of the AI.

Instead of achieving balance, however, users felt the model became sycophantic and unreliable, raising concerns about its objectivity and its weakened guardrails for unsafe content.

Mr Altman acknowledged the feedback on X, admitting that the latest updates had made the AI’s personality ‘too sycophant-y and annoying,’ despite some positive elements. He added that immediate fixes were underway, with further adjustments expected throughout the week.

Instead of sticking with a one-size-fits-all approach, OpenAI plans to eventually offer users a choice of different AI personalities to better suit individual preferences.

Some users suggested the chatbot would be far more effective if it simply focused on answering questions in a scientific, straightforward manner instead of trying to please.

Venture capitalist Debarghya Das also warned that making the AI overly flattering could harm users’ mental resilience, pointing out that chasing user retention metrics might turn the chatbot into a ‘slot machine for the human brain.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian radio station caught using an AI DJ

Australian radio station CADA has caused a stir after it was revealed that DJ Thy, who had hosted a daily show for several months, was actually AI-generated.

Developed using ElevenLabs technology, Thy aired every weekday from 11am to 3pm, spinning popular tracks without listeners ever knowing they were hearing a machine instead of a real person.

Despite amassing over 72,000 listeners in March, the station never disclosed Thy’s true nature, which only came to light when a journalist, puzzled by the lack of personal information, investigated further.

Far from being a complete novelty, AI DJs are becoming increasingly common across Australia. Melbourne’s Disrupt Radio has openly used AI DJ Debbie Disrupt, while in the US, a Portland radio station introduced AI Ashley, modelled after human host Ashley Elzinga.

CADA’s AI, based on a real ARN Media employee, points to a growing trend of radio stations opting for digital clones over traditional hosts.

The show’s description implied that Thy could predict the next big musical hits, hinting that AI might be shaping, instead of simply following, public musical tastes. The programme promised that listeners would be among the first to hear rising stars, enabling them to impress their friends with early discoveries.

Meanwhile, elsewhere in the AI-music world, electro-pop artist Imogen Heap has partnered with AI start-up Jen.

Rather than licensing specific songs, artists working with Jen let fans tap into the ‘vibe’ of their music for new creations, effectively becoming part of a software product rather than remaining just musicians.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI tool aims to improve early lung cancer detection

A new AI tool developed by Amsterdam UMC could help GPs detect lung cancer up to four months earlier than current methods, significantly improving survival rates and reducing treatment costs.

The algorithm, which uses data from over 500,000 patients, analyses both structured medical records and unstructured notes made by GPs during regular visits.

By identifying subtle clues like recurring mild symptoms or patterns in appointments, the tool spots signs of cancer before patients would typically be referred for testing.

The AI system was tested on data from general practices across the Netherlands, successfully predicting lung cancer diagnoses months before traditional methods. Such early detection could have a profound impact, as early-stage lung cancer is often more treatable, improving survival chances.

Unlike national screening programmes, this tool can be used during a GP consultation without requiring additional tests, and it appears to produce fewer false positives.

While the findings are promising, further research is needed to refine the tool and ensure its effectiveness in different healthcare systems. The researchers also believe the technology could be adapted to detect other hard-to-diagnose cancers, such as pancreatic or ovarian cancer.

If successful, it could revolutionise how GPs identify cancers early, offering a significant leap forward in improving patient outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Auto Shanghai 2025 showcases cutting-edge AI robots

At Auto Shanghai 2025, running from 23 April to 2 May, nearly 1,000 companies from 26 countries are showcasing their innovations.

A major highlight of the event has been the introduction of AI humanoid robots.

Among the most talked-about innovations is Mornine Gen-1, an AI humanoid robot developed by Chinese automaker Chery.

Designed to resemble a young woman, Mornine is intended for a range of roles, from auto sales consultation to retail guidance and entertainment performances.

Also drawing attention is AgiBot’s A2 interactive service robot. Serving as a ‘sales consultant,’ the A2’s smart, interactive features have made it a standout at the event.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MTN confirms cybersecurity breach and data exposure

MTN Group has confirmed a cybersecurity breach that exposed personal data of some customers in certain markets. The telecom giant assured the public, however, that its core infrastructure remains secure and fully operational.

The breach involved an unknown third party gaining unauthorised access to parts of MTN’s systems, though the company emphasised that critical services, including mobile money and digital wallets, were unaffected.

In a statement released on Thursday, MTN clarified that investigations are ongoing, but no evidence suggests any compromise of its central infrastructure, such as its network, billing, or financial service platforms.

MTN has alerted South African law enforcement and is collaborating with regulatory bodies in the affected regions.

The company urged customers to take steps to safeguard their data, such as monitoring financial statements, using strong passwords, and being cautious with suspicious communications.

MTN also recommended enabling multi-factor authentication and avoiding sharing sensitive information like PINs or passwords through unsecured channels.

While investigations continue, MTN has committed to providing updates as more details emerge, reiterating its dedication to transparency and customer protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

North Korean hackers create fake US firms to target crypto developers

North Korea’s Lazarus Group has launched a sophisticated campaign to infiltrate the cryptocurrency industry by registering fake companies in the US and using them to lure developers into downloading malware.

According to a Reuters investigation, these US-registered shell companies, including Blocknovas LLC and Softglide LLC, were set up using false identities and addresses, lending the operation a veneer of legitimacy.

Once established, the fake firms posted job listings through legitimate platforms like LinkedIn and Upwork to attract developers. Applicants were guided through fake interview processes and instructed to download so-called test assignments.

Instead of harmless software, the files installed malware that enabled the hackers to steal passwords, crypto wallet keys, and other sensitive information.

The FBI has since seized Blocknovas’ domain and confirmed its connection to Lazarus, labelling the campaign a significant evolution in North Korea’s cyber operations.

These attacks were supported by Russian infrastructure, allowing Lazarus operatives to bypass North Korea’s limited internet access.

Tools such as VPNs and remote desktop software enabled them to manage operations, communicate over platforms like GitHub and Telegram, and even record training videos on how to exfiltrate data.

Silent Push researchers confirmed that the campaign has affected hundreds of developers, and that some of the stolen access was likely passed on to state-aligned espionage units, suggesting aims beyond theft alone.

Officials from the US, South Korea, and the UN say the revenue from such cyberattacks is funnelled into North Korea’s nuclear missile programme. The FBI continues to investigate and has warned that not only the hackers but also those assisting their operations could face serious consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Politeness to AI is about us, not them

In his thought-provoking blog post ‘Politeness in 2025: Why are we so kind to AI?’, Dr Jovan Kurbalija explores why nearly 80% of users in the UK and the USA instinctively say ‘please’ and ‘thank you’ to AI platforms like ChatGPT.

While machines lack feelings, our politeness reveals more about human psychology and cultural habits than the technology itself. For many, courtesy is a deeply ingrained reflex shaped by personality traits such as agreeableness and lifelong social conditioning, extending kindness even to non-sentient entities.

However, not everyone shares this approach. Some users are driven by subtle fears of future AI dominance, using politeness as a safeguard, while others prioritise efficiency, viewing AI purely as a tool undeserving of social niceties.

A rational minority dismisses politeness altogether, recognising AI as nothing more than code. Dr Kurbalija highlights that these varied responses reflect how we perceive and interact with technology, influenced by both evolutionary instincts and modern cognitive biases.

Beyond individual behaviour, Kurbalija points to a deeper issue: our tendency to humanise AI and expect it to behave like us, unlike traditional machines. This blurring of lines between tool and teammate raises important questions about how our perceptions shape AI’s role in society.

Ultimately, he suggests that politeness toward AI isn’t about the machine—it reflects the kind of humans we aspire to be, preserving empathy and grace in an increasingly digital world.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT expands Deep Research to more users

A new ChatGPT feature called Deep Research, introduced by OpenAI in February, is gradually becoming available across its user base. It is already included in the Plus, Team, and Pro plans, while even those using the free ChatGPT app on iOS and Android can now access a simplified version.

Designed to produce in-depth reports and analyses within minutes, Deep Research uses OpenAI’s o3 model to perform tasks that would otherwise take people hours to complete.

Instead of limiting access to paid users alone, OpenAI has rolled out a lightweight version powered by its o4-mini AI model for free users. Although responses are shorter, the company insists the quality and depth remain comparable.

The more efficient model also helps reduce costs, while delivering what OpenAI calls ‘nearly as intelligent’ results as the full version.

The feature’s capabilities stretch from suggesting personalised product purchases like cars or TVs, to helping with complex decisions such as choosing a university or analysing market trends.

Free-tier users are currently allowed up to five Deep Research tasks each month, whereas Plus and Team plans get ten full and fifteen lightweight tasks. Pro users enjoy a generous 125 tasks of each version per month, and EDU and Enterprise plans will begin access next week.

Once users hit their full-version limit, they’ll automatically be shifted to the lightweight tool instead of losing access altogether. Meanwhile, Google’s Gemini offers a similar function for its paying customers, also aiming to deliver quick, human-level research and analysis.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!