Meta sues AI firm over fake nude images created without consent

Meta has filed a lawsuit against Joy Timeline HK Ltd in Hong Kong, accusing the firm of using its platforms to promote a generative AI app called CrushAI.

The app allows users to digitally strip clothes from images of people, typically without their consent. Meta said the company repeatedly attempted to bypass its ad review systems to push harmful content, running ads with phrases like ‘see anyone naked’ on Facebook and Instagram.

The lawsuit follows Meta’s broader investigation into ‘nudify’ apps, which are increasingly being used to create sexualised deepfakes. Despite bans on nonconsensual explicit content, the company said such apps evade detection by disguising their ads or rotating domain names once they are blocked.

According to research by Cornell Tech, over 8,000 ads linked to CrushAI appeared on Meta platforms in recent months. Meta responded by updating its detection systems with a broader range of flagged terms and emojis.
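
Meta has not published the details of those systems. Purely as an illustrative sketch, a flagged-term and emoji filter for ad text could look like the snippet below; the blocklist, the normalisation steps and the function names are assumptions for illustration, not Meta’s actual rules.

```python
import re
import unicodedata

# Hypothetical blocklist of flagged terms and emojis (illustrative only,
# not Meta's actual detection rules).
FLAGGED_TERMS = {"see anyone naked", "undress", "remove clothes"}
FLAGGED_EMOJIS = {"\U0001F351", "\U0001F445"}   # peach, tongue

def normalise(text: str) -> str:
    """Lower-case and strip accents and zero-width characters often used to dodge filters."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    return re.sub("[\u200b\u200c\u200d]", "", text).lower()

def is_flagged(ad_text: str) -> bool:
    """Return True if the ad text contains a flagged phrase or emoji."""
    cleaned = normalise(ad_text)
    if any(term in cleaned for term in FLAGGED_TERMS):
        return True
    return any(emoji in ad_text for emoji in FLAGGED_EMOJIS)

print(is_flagged("See anyone NAKED with one tap!"))   # True
```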

While many of the manipulated images target celebrities, concerns are growing about the use of such technology to exploit minors. In one case in Florida, two teenagers used similar AI tools to create sexualised images of classmates.

The issue has sparked legal action in the US, where the Take It Down Act, signed into law earlier this year, criminalises the publication of nonconsensual deepfake imagery and simplifies removal processes for victims.


Meta hires top AI talent from Google and Sesame

Meta is assembling a new elite AI research team aimed at developing artificial general intelligence (AGI), luring top talent from rivals including Google and AI voice startup Sesame.

Among the high-profile recruits are Jack Rae, a principal researcher from Google DeepMind, and Johan Schalkwyk, a machine learning lead from Sesame.

Meta is also close to finalising a multibillion-dollar investment in Scale AI, a data-labelling startup led by CEO Alexandr Wang, who is also expected to join the new initiative.

The new group, referred to internally as the ‘superintelligence’ team, is central to CEO Mark Zuckerberg’s plan to close the gap with competitors like Google and OpenAI.

Following disappointment over Meta’s recent AI model, Llama 4, Zuckerberg hopes the newly acquired expertise will help improve future models and expand AI capabilities in areas like voice and personalisation.

Zuckerberg has taken a hands-on approach, personally recruiting engineers and researchers, sometimes meeting with them at his homes in California. Meta is reportedly offering compensation packages worth tens of millions of dollars, including equity, to attract leading AI talent.

The company aims to hire around 50 people for the team and is also seeking a chief scientist to help lead the effort.

The broader strategy involves investing heavily in data, chips, and human expertise — three pillars of advanced AI development. By partnering with Scale AI and recruiting high-profile researchers, Meta is trying to strengthen its position in the AI race.

Meanwhile, rivals like Google are reinforcing their defences, with Koray Kavukcuoglu named as chief AI architect in a new senior leadership role to ensure DeepMind’s technologies are more tightly integrated into Google’s products.


Turing Institute urges stronger AI research security

The Alan Turing Institute has warned that urgent action is needed to protect the UK’s AI research from espionage, intellectual property theft and risky international collaborations.

Its Centre for Emerging Technology and Security (CETaS) has published a report calling for a culture shift across academia to better recognise and mitigate these risks.

The report highlights inconsistencies in how security risks are understood within universities and a lack of incentives for researchers to follow government guidelines. Sensitive data, the dual-use potential of AI, and the risk of reverse engineering make the field particularly vulnerable to foreign interference.

Lead author Megan Hughes stressed the need for a coordinated response, urging government and academia to find the right balance between academic freedom and security.

The report outlines 13 recommendations, including expanding support for academic due diligence and issuing clearer guidance on high-risk international partnerships.

Further proposals call for compulsory research security training, better threat communication from national agencies, and standardised risk assessments before publishing AI research.

The aim is to build a more resilient research ecosystem as global interest in UK-led AI innovation continues to grow.


Meta launches AI to teach machines physical reasoning

Meta Platforms has unveiled V-JEPA 2, an open-source AI model designed to help machines understand and interact with the physical world more like humans do.

The technology allows AI agents, including delivery robots and autonomous vehicles, to observe object movement and predict how those objects may behave in response to actions.

The company explained that just as people intuitively understand that a ball tossed into the air will fall due to gravity, AI systems using V-JEPA 2 gain a similar ability to reason about cause and effect in the real world.

Trained using video data, the model recognises patterns in how humans and objects move and interact, helping machines learn to reach, grasp, and reposition items more naturally.
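
Meta has released V-JEPA 2 openly, but the toy sketch below is not that code; it only illustrates the general joint-embedding idea behind such world models: encode frames and train a predictor to anticipate the embedding of a future frame rather than its raw pixels. The network sizes, the action input and the single training step are invented for illustration.

```python
import torch
import torch.nn as nn

# Toy joint-embedding predictive setup (illustrative only, not Meta's V-JEPA 2 code).
# Frames are encoded into embeddings, and a predictor learns to anticipate the
# embedding of the next frame given the current embedding and an action vector.

EMB_DIM = 128

class FrameEncoder(nn.Module):
    """Maps a flattened 32x32 greyscale frame to an embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(32 * 32, 256), nn.ReLU(), nn.Linear(256, EMB_DIM))

    def forward(self, frame):
        return self.net(frame)

class LatentPredictor(nn.Module):
    """Predicts the next frame's embedding from the current embedding plus an action."""
    def __init__(self, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(EMB_DIM + action_dim, 256), nn.ReLU(), nn.Linear(256, EMB_DIM))

    def forward(self, emb, action):
        return self.net(torch.cat([emb, action], dim=-1))

encoder, predictor = FrameEncoder(), LatentPredictor()
optimiser = torch.optim.Adam(list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3)

# One training step on random stand-in data; real training would use video clips.
frame_t, frame_next = torch.rand(8, 32 * 32), torch.rand(8, 32 * 32)
action = torch.rand(8, 4)

predicted = predictor(encoder(frame_t), action)
with torch.no_grad():                      # target embedding is treated as fixed here
    target = encoder(frame_next)
loss = nn.functional.mse_loss(predicted, target)
optimiser.zero_grad()
loss.backward()
optimiser.step()
print(f"latent prediction loss: {loss.item():.4f}")
```

Predicting in embedding space rather than pixel space is what lets such models focus on how a scene will evolve, for instance that a grasped object moves with the hand, without modelling every visual detail.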

Meta described the tool as a step forward in building AI that can think ahead, plan actions and respond intelligently to dynamic environments. In lab tests, robots powered by V-JEPA 2 performed simple tasks that relied on spatial awareness and object handling.

The company, led by CEO Mark Zuckerberg, is ramping up its AI initiatives to compete with rivals like Microsoft, Google, and OpenAI. By improving machine reasoning through world models such as V-JEPA 2, Meta aims to accelerate its progress toward more advanced AI.


Meta and TikTok contest the EU’s compliance charges

Meta and TikTok have taken their fight against an EU supervisory fee to Europe’s second-highest court, arguing that the charges are disproportionate and based on flawed calculations.

The fee, introduced under the Digital Services Act (DSA), requires major online platforms to pay 0.05% of their annual global net income to cover the European Commission’s oversight costs.
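
As a back-of-envelope illustration of that cap only (the Commission’s full methodology also apportions its costs by user numbers, which is precisely the step the companies dispute), the arithmetic is simple; the income figure below is invented.

```python
# Illustrative only: the DSA caps the supervisory fee at 0.05% of annual
# worldwide net income. The Commission then apportions its oversight costs
# among designated platforms, partly by user numbers, which is the step
# Meta and TikTok contest. The income figure here is made up.
annual_global_net_income = 40_000_000_000        # hypothetical 40 billion euros
fee_cap = 0.0005 * annual_global_net_income      # 0.05% ceiling
print(f"Maximum supervisory fee: {fee_cap:,.0f} euros")   # 20,000,000 euros
```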

Meta questioned the Commission’s methodology, claiming the levy was based on the entire group’s revenue instead of the specific EU-based subsidiary.

The company’s lawyer told judges it still lacked clarity on how the fee was calculated, describing the process as opaque and inconsistent with the spirit of the law.

TikTok also criticised the charge, alleging inaccurate and discriminatory data inflated its payment.

Its legal team argued that user numbers were double-counted when people switched between devices, and that the Commission had wrongly calculated fees based on group profits rather than platform-specific earnings.

The Commission defended its approach, saying group resources should bear the cost when consolidated accounts are used. A ruling is expected from the General Court sometime next year.


AI startup faces lawsuit from Disney and Universal

Two of Hollywood’s most powerful studios, Disney and Universal, have launched a copyright infringement lawsuit against the AI firm Midjourney, accusing it of illegally replicating iconic characters.

The studios claim the San Francisco-based company copied their creative works without permission, describing it as a ‘bottomless pit of plagiarism’.

Characters such as Darth Vader, Elsa, and the Minions were cited in the 143-page complaint, which alleges Midjourney used these images to train its AI system and generate similar content.

Disney and Universal argue that the AI firm failed to invest in the creative process, yet profited heavily from the output — reportedly earning US$300 million in paid subscriptions last year.

Despite early attempts by the studios to raise concerns and propose safeguards already adopted by other AI developers, Midjourney allegedly ignored them and pressed ahead with further product releases. The company, which calls itself a small, self-funded team of 11, has declined to comment on the lawsuit directly but insists it has a long future ahead.

Disney’s legal chief, Horacio Gutierrez, stressed the importance of protecting creative works that result from decades of investment. While supporting AI as a tool for innovation, he maintained that ‘piracy is piracy’, regardless of whether humans or machines carry it out.

The studios are seeking damages and a court order to stop the AI firm from continuing its alleged copyright violations.


Wikipedia halts AI summaries test after backlash

Wikipedia has paused a controversial trial of AI-generated article summaries following intense backlash from its community of volunteer editors.

The Wikimedia Foundation had planned a two-week, opt-in test for mobile users, with summaries produced by Aya, an open-weight AI model developed by Cohere.

However, the reaction from editors was swift and overwhelmingly negative. The discussion page became flooded with objections, with contributors arguing that such summaries risked undermining the site’s reputation for neutrality and accuracy.

Some expressed concerns that inserting AI content would override Wikipedia’s long-standing collaborative approach by effectively installing a single, unverifiable voice atop articles.

Editors warned that AI-generated summaries lacked proper sourcing and could compromise the site’s credibility. Recent AI blunders by other tech giants, including Google’s glue-on-pizza mishap and Apple’s false death alert, were cited as cautionary examples of reputational risk.

For many, the possibility of similar errors appearing on Wikipedia was unacceptable.

Marshall Miller of the Wikimedia Foundation acknowledged the misstep in communication and confirmed the project’s suspension.

While the Foundation remains interested in exploring AI to improve accessibility, it has committed to ensuring any future implementation involves direct participation from the Wikipedia community.


INTERPOL cracks down on global cybercrime networks

Over 20,000 malicious IP addresses and domains linked to data-stealing malware have been taken down during Operation Secure, a coordinated cybercrime crackdown led by INTERPOL between January and April 2025.

Law enforcement agencies from 26 countries worked together to locate rogue servers and dismantle criminal networks instead of tackling threats in isolation.

The operation, supported by cybersecurity firms including Group-IB, Kaspersky and Trend Micro, led to the removal of nearly 80 per cent of the identified malicious infrastructure. Authorities seized 41 servers, confiscated over 100GB of stolen data and arrested 32 suspects.

More than 216,000 individuals and organisations were alerted, helping them act quickly by changing passwords, freezing accounts or blocking unauthorised access.

Vietnamese police arrested 18 people, including a group leader found with cash, SIM cards and business records linked to fraudulent schemes. Sri Lankan and Nauruan authorities carried out home raids, arresting 14 suspects and identifying 40 victims.

In Hong Kong, police traced 117 command-and-control servers across 89 internet providers. INTERPOL hailed the operation as proof of the impact of cross-border cooperation in dismantling cybercriminal infrastructure before it can flourish undisturbed.


IBM sets 2029 target for quantum breakthrough

IBM has set out a detailed roadmap to deliver a practical quantum computer by 2029, marking a major milestone in its long-term strategy.

The company plans to build its ‘Starling’ quantum system at a new data centre in Poughkeepsie, New York, targeting around 200 logical qubits, enough to begin outperforming classical computers on specific tasks rather than being held back by error-correction limitations.

Quantum computers rely on qubits to perform complex calculations, but high error rates have held back their potential. IBM shifted its approach in 2019, designing error-correction algorithms based on real, manufacturable chips instead of theoretical models.

The company says the change will significantly reduce the number of qubits needed for error correction.
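
IBM has not set out a simple qubit budget here, so purely as a back-of-envelope sketch, the snippet below compares the textbook surface-code estimate of roughly 2d² physical qubits per logical qubit at code distance d with a hypothetical lower-overhead code. The distance and the ten-fold reduction factor are assumptions, not IBM’s figures.

```python
# Back-of-envelope comparison (assumed numbers, not IBM's published figures).
# Surface codes need roughly 2 * d**2 physical qubits per logical qubit at
# code distance d; lower-overhead codes aim to shrink that multiplier.

LOGICAL_QUBITS = 200      # Starling's reported target
CODE_DISTANCE = 13        # assumed distance needed for useful error rates

surface_per_logical = 2 * CODE_DISTANCE ** 2
surface_total = LOGICAL_QUBITS * surface_per_logical

reduction_factor = 10     # hypothetical improvement from a more compact code
compact_total = surface_total // reduction_factor

print(f"Surface-code estimate:   ~{surface_total:,} physical qubits")
print(f"With a 10x denser code:  ~{compact_total:,} physical qubits")
```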

Confident in its new method, IBM plans to build a series of quantum systems through 2027, each advancing toward a larger, more capable machine.

Vice President Jay Gambetta said the key scientific questions have already been resolved, meaning what remains is primarily an engineering challenge rather than a scientific one.


Meta strikes $15B deal with Scale AI

Meta Platforms is set to acquire a 49 percent stake in Scale AI for nearly $15 billion, marking its largest external investment to date.

CEO Mark Zuckerberg sees the agreement as a significant move to accelerate Meta’s push into AI, rather than relying solely on in-house development.

Scale AI, founded in 2016, supplies curated training data to major players such as OpenAI, Google, Microsoft and Meta. The company expects to more than double its revenue in 2025 to around $2 billion.

Once the deal is finalised, Scale AI CEO Alexandr Wang is expected to join Meta’s new AI team focused on developing artificial general intelligence (AGI).

According to Bloomberg, Zuckerberg is hiring around 50 people for a ‘superintelligence’ team.

The effort aligns with Meta’s broader AI plans, including capital expenditure of up to $65 billion in 2025 to expand its AI infrastructure and keep pace with rivals in the AI race.
