Two of Hollywood’s most powerful studios, Disney and Universal, have launched a copyright infringement lawsuit against the AI firm Midjourney, accusing it of illegally replicating iconic characters.
The studios claim the San Francisco-based company copied their creative works without permission, describing it as a ‘bottomless pit of plagiarism’.
Characters such as Darth Vader, Elsa, and the Minions were cited in the 143-page complaint, which alleges Midjourney used these images to train its AI system and generate similar content.
Disney and Universal argue that the AI firm failed to invest in the creative process, yet profited heavily from the output, reportedly earning US$300 million in paid subscriptions last year.
Despite early attempts by the studios to raise concerns and propose safeguards already adopted by other AI developers, Midjourney allegedly ignored them and pressed ahead with further product releases. The company, which describes itself as a small, self-funded team of 11, has declined to comment on the lawsuit directly but insists it has a long future ahead.
Disney’s legal chief, Horacio Gutierrez, stressed the importance of protecting creative works that result from decades of investment. While supporting AI as a tool for innovation, he maintained that ‘piracy is piracy’, regardless of whether humans or machines carry it out.
The studios are seeking damages and a court order to stop the AI firm from continuing its alleged copyright violations.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Wikipedia has paused a controversial trial of AI-generated article summaries following intense backlash from its community of volunteer editors.
The Wikimedia Foundation had planned a two-week opt-in trial for mobile users, with summaries produced by Aya, an open-weight AI model developed by Cohere.
However, the reaction from editors was swift and overwhelmingly negative. The discussion page became flooded with objections, with contributors arguing that such summaries risked undermining the site’s reputation for neutrality and accuracy.
Some expressed concerns that inserting AI content would override Wikipedia’s long-standing collaborative approach by effectively installing a single, unverifiable voice atop articles.
Editors warned that AI-generated summaries lacked proper sourcing and could compromise the site’s credibility. Recent AI blunders by other tech giants, including Google’s glue-on-pizza mishap and Apple’s false death alert, were cited as cautionary examples of reputational risk.
For many, the possibility of similar errors appearing on Wikipedia was unacceptable.
Marshall Miller of the Wikimedia Foundation acknowledged the misstep in communication and confirmed the project’s suspension.
While the Foundation remains interested in exploring AI to improve accessibility, it has committed to ensuring any future implementation involves direct participation from the Wikipedia community.
Over 20,000 malicious IP addresses and domains linked to data-stealing malware have been taken down during Operation Secure, a coordinated cybercrime crackdown led by INTERPOL between January and April 2025.
Law enforcement agencies from 26 countries worked together to locate rogue servers and dismantle criminal networks, rather than tackling threats in isolation.
The operation, supported by cybersecurity firms including Group-IB, Kaspersky and Trend Micro, led to the removal of nearly 80 per cent of the identified malicious infrastructure. Authorities seized 41 servers, confiscated over 100GB of stolen data and arrested 32 suspects.
More than 216,000 individuals and organisations were alerted, helping them act quickly by changing passwords, freezing accounts or blocking unauthorised access.
Vietnamese police arrested 18 people, including a group leader found with cash, SIM cards and business records linked to fraudulent schemes. Sri Lankan and Nauruan authorities carried out home raids, arresting 14 suspects and identifying 40 victims.
In Hong Kong, police traced 117 command-and-control servers across 89 internet providers. INTERPOL hailed the operation as proof of the impact of cross-border cooperation in dismantling cybercriminal infrastructure.
The African School of Internet Governance (AfriSIG) convened in Dar es Salaam, Tanzania, from 23 to 28 May 2025, bringing together a broad mix of African and international stakeholders for intensive internet, ICT, and data governance training. As a precursor to the African Internet Governance Forum (AfIGF), the school aimed to strengthen civil society, public, and private sector expertise in navigating Africa’s rapidly evolving digital landscape.
Representing Diplo, Dr Katherine Getao delivered a keynote on ‘Cybersecurity and Cybercrime in Africa,’ emphasising the continent’s urgent need to build strong digital defences amid rising cyber threats. While the challenges are pressing, she pointed out that they also open avenues for youth employment and entrepreneurship, especially in the cybersecurity sector.
Dr Getao also stressed the significance of African participation in global policy dialogues, such as the Geneva Dialogue, to ensure the continent’s digital priorities are heard and reflected in international frameworks. Drawing from her experience with the UN Group of Governmental Experts, she advocated for Africa to be more active in shaping responsible state behaviour in cyberspace.
The event’s panel discussions and workshops further explored how African voices can better leverage platforms like the Internet Governance Forum to influence global tech governance. For Diplo and initiatives like the Geneva Dialogue, AfriSIG was a key venue for aligning African digital development with international policy momentum.
IBM has set out a detailed roadmap to deliver a practical quantum computer by 2029, marking a major milestone in its long-term strategy.
The company plans to build its ‘Starling’ quantum system at a new data centre in Poughkeepsie, New York, targeting around 200 logical qubits, enough to begin outperforming classical computers in specific tasks that error-correction limitations have so far kept out of reach.
Quantum computers rely on qubits to perform complex calculations, but high error rates have held back their potential. IBM shifted its approach in 2019, designing error-correction algorithms based on real, manufacturable chips instead of theoretical models.
The change, as the company says, will significantly reduce the qubits needed to fix errors.
With confidence in its new method, IBM will build a series of quantum systems through 2027, each advancing toward a larger, more capable machine.
IBM Vice President Jay Gambetta stated that the key scientific questions have already been resolved, meaning what remains is primarily an engineering challenge rather than a scientific one.
Meta Platforms is set to acquire a 49 per cent stake in Scale AI for nearly $15 billion, marking its largest-ever deal.
CEO Mark Zuckerberg sees the agreement as a significant move to accelerate Meta’s push into AI rather than relying solely on in-house development.
Scale AI, founded in 2016, supplies curated training data to major players such as OpenAI, Google, Microsoft and Meta. The company expects to more than double its revenue in 2025 to around $2 billion.
Once the deal is finalised, Scale AI CEO Alexandr Wang is expected to join Meta’s new AI team focused on developing artificial general intelligence (AGI).
The effort aligns with Meta’s broader AI plans, including capital expenditure of up to $65 billion in 2025 to expand its AI infrastructure.
In a new blog post titled The Gentle Singularity, OpenAI CEO Sam Altman predicted that AI systems capable of producing ‘novel insights’ may arrive as early as 2026.
While Altman’s essay blends optimism with caution, it subtly signals the company’s next central ambition: creating AI that moves beyond repeating existing knowledge and begins generating original ideas.
Altman’s comments echo a broader industry trend. Researchers are already using OpenAI’s recent o3 and o4-mini models to generate new hypotheses. Competitors like Google, Anthropic and FutureHouse are also shifting their focus towards scientific discovery.
Google’s AlphaEvolve has reportedly devised novel solutions to complex maths problems, while FutureHouse claims to have built AI capable of genuine scientific breakthroughs.
Despite the optimism, experts remain sceptical. Critics argue that AI still struggles to ask meaningful questions, a key ingredient for genuine insight.
Former OpenAI researcher Kenneth Stanley, now leading Lila Sciences, says generating creative hypotheses is a more formidable challenge than agentic behaviour. Whether OpenAI achieves the leap remains uncertain, but Altman’s essay may hint at the company’s next bold step.
Cybersecurity researchers have uncovered a brief but significant leak of over 600 gigabytes of data, exposing information on millions of Chinese citizens.
The haul, containing WeChat, Alipay, banking, and residential records, appears to come from a centralised system, possibly built for large-scale surveillance rather than resulting from a random data breach.
According to research from Cybernews and cybersecurity consultant Bob Diachenko, the data was likely used to build individuals’ detailed behavioural, social and economic profiles.
They warned the information could be exploited for phishing, fraud, blackmail or even disinformation campaigns. Although only 16 datasets were reviewed before the database vanished, they indicated a highly organised and purposeful collection effort.
The source of the leak remains unknown, but the scale and nature of the data suggest it may involve government-linked or state-backed entities rather than lone hackers.
The exposed information could allow malicious actors to track residence locations, financial activity and personal identifiers, placing millions of people at risk.
The US Social Security Administration is launching digital access to Social Security numbers in the summer of 2025 through its ‘My Social Security’ portal. The initiative aims to improve convenience, reduce physical card replacement delays, and protect against identity theft.
The digital rollout responds to the challenges of outdated paper cards, rising fraud risks, and growing demand for remote access to US government services. Cybersecurity experts also recommend using VPNs, antivirus software, and identity monitoring services to guard against phishing scams and data breaches.
While it promises faster and more secure access, experts urge users to bolster account protection through strong passwords, two-factor authentication, and avoidance of public Wi-Fi when accessing sensitive data.
Users should regularly check their credit reports and SSA records and consider requesting an IRS PIN to prevent tax-related fraud. The SSA says this move will make Social Security more efficient without compromising safety.
OpenAI has revealed that its annualised revenue has surged to $10 billion as of June 2025, nearly doubling since December 2024, when it stood at $5.5 billion.
The rapid growth is driven by widespread adoption of ChatGPT and the company’s AI models across consumer and business markets, putting OpenAI on course to meet its earlier goal of $12.7 billion in revenue for the full year.
The $10 billion figure excludes licensing income from Microsoft, a major investor, and some large one-off contracts, according to an OpenAI spokesperson. Despite recording a loss of about $5 billion last year, OpenAI’s impressive revenue scale places it well ahead of many rivals benefiting from the AI boom.
Other players in the AI space are also seeing strong growth. For instance, Anthropic recently surpassed $3 billion in annualised revenue, driven by demand from startups using its code-generation models. Meanwhile, OpenAI plans to raise up to $40 billion in new funding, valuing the company at $300 billion.
Since launching ChatGPT over two years ago, OpenAI has expanded its offerings with various subscription plans and services. The company reported 500 million weekly active users as of March 2025, underscoring its dominant position in the AI market.