Hollywood studios take legal action against MiniMax for AI copyright infringement

Disney, Warner Bros. Discovery and NBCUniversal have filed a lawsuit in California against Chinese AI company MiniMax, accusing it of large-scale copyright infringement.

The studios allege that MiniMax’s Hailuo AI service generates unauthorised images and videos featuring well-known characters such as Darth Vader, while marketing itself as a ‘Hollywood studio in your pocket’ rather than respecting copyright law.

According to the complaint, MiniMax, reportedly worth $4 billion, ignored cease-and-desist requests and continues to profit from copyrighted works. The studios argue that the company could easily implement safeguards, pointing to existing controls that already block violent or explicit content.

The studios claim that MiniMax’s approach poses a serious threat to both creators and the broader film industry, which contributes hundreds of billions of dollars to the US economy.

Plaintiffs, including Disney’s Marvel and Lucasfilm units, Universal’s DreamWorks Animation and Warner Bros.’ DC Comics, are seeking statutory damages of up to $150,000 per infringed work or unspecified compensation.

They are also seeking an injunction to stop MiniMax from continuing the alleged violations, rather than relying on damages alone.

The Motion Picture Association has backed the lawsuit, with its chairman Charles Rivkin warning that unchecked copyright infringement could undermine millions of jobs and the cultural value created by the American film industry.

MiniMax, based in Shanghai, has not responded publicly to the claims but has previously described itself as a global AI foundation model company with over 157 million users worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

When language models fabricate truth: AI hallucinations and the limits of trust

AI has come far from rule-based systems and chatbots with preset answers. Large language models (LLMs), powered by vast amounts of data and statistical prediction, now generate text that can mirror human speech, mimic tone, and simulate expertise, but also produce convincing hallucinations that blur the line between fact and fiction.

From summarising policy to drafting contracts and responding to customer queries, these tools are becoming embedded across industries, governments, and education systems.

As their capabilities grow, so does the underlying problem that many still underestimate. These systems frequently produce convincing but entirely false information. Often referred to as ‘AI hallucinations’, such factual distortions pose significant risks, especially when users trust outputs without questioning their validity.

Once deployed in high-stakes environments, from courts to political arenas, the line between generative power and generative failure becomes more challenging to detect and more dangerous to ignore.

When facts blur into fiction

AI hallucinations are not simply errors. They are confident statements presented as fact, produced by probability rather than knowledge. Language models are designed to generate the most likely next word, not the correct one. That difference may be subtle in casual settings, but it becomes critical in fields like law, healthcare, or media.
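
To make that distinction concrete, here is a toy Python sketch (the prompt, vocabulary, and probabilities are all invented purely for illustration) of what ‘most likely next word’ means in practice: the model returns whichever continuation scores highest, with no notion of whether it is true.

```python
# Toy illustration: next-word prediction selects the most probable
# continuation, not the factually correct one. All probabilities
# below are invented for demonstration purposes.

next_word_probs = {
    "The capital of Australia is": {
        "Sydney": 0.48,    # common in casual text, but factually wrong
        "Canberra": 0.41,  # correct, yet slightly less probable here
        "Melbourne": 0.11,
    }
}

def predict_next_word(prompt: str) -> str:
    """Return the highest-probability continuation for a known prompt."""
    candidates = next_word_probs[prompt]
    return max(candidates, key=candidates.get)

if __name__ == "__main__":
    prompt = "The capital of Australia is"
    print(prompt, predict_next_word(prompt))
    # Prints 'Sydney': fluent, confident, and wrong.
```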

One such example emerged when an AI chatbot misrepresented political programmes in the Netherlands, falsely attributing policy statements about Ukraine to the wrong party. The error spread misinformation and triggered official concern. The chatbot had no malicious intent, yet its hallucination shaped public discourse.

Mistakes like these often pass unnoticed because the tone feels authoritative. The model sounds right, and that is the danger.

When language models hallucinate, they sound credible, and users believe them. Discover why this is a growing risk.
Image via AI / ChatGPT

Why large language models hallucinate

Hallucinations are not bugs in the system. They are a direct consequence of how language models are built. Trained to complete text based on patterns, these systems have no fundamental understanding of the world, no memory of ‘truth’, and no internal model of fact.

A recent study reveals that even the way models are tested may contribute to hallucinations. Instead of rewarding caution or encouraging honesty, current evaluation frameworks favour responses that appear complete and confident, even when inaccurate. The more assertive the lie, the better it scores.
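
A quick worked example (with invented numbers) shows how such scoring rewards assertiveness. If a benchmark gives one point for a correct answer and nothing for either a wrong answer or an honest ‘I don’t know’, then guessing always beats abstaining, however unreliable the guess; only a penalty for wrong answers changes the incentive.

```python
# Illustrative only: how a benchmark's scoring rule shapes the incentive
# to guess confidently rather than abstain. Numbers are invented.

def expected_score(p_correct: float, wrong_penalty: float, abstain: bool) -> float:
    """Expected score: +1 if correct, -wrong_penalty if wrong, 0 if abstaining."""
    if abstain:
        return 0.0
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

p = 0.3  # suppose the model is right only 30% of the time on these questions
for penalty in (0.0, 1.0):
    guess = round(expected_score(p, penalty, abstain=False), 2)
    print(f"wrong-answer penalty {penalty}: guess scores {guess}, abstaining scores 0.0")
# With no penalty, guessing (0.3) beats an honest 'I don't know' (0.0);
# only once wrong answers cost points (-0.4) does abstention pay off.
```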

Alongside these structural flaws, real-world use reveals several recurring triggers. The most frequent causes of AI hallucinations include:

  • Vague or ambiguous prompts: a lack of specificity forces the model to fill gaps with speculative content that may not be grounded in real facts.
  • Overly long conversations: as the prompt history grows, especially without proper context management, models lose track and invent plausible answers.
  • Missing knowledge: when a model lacks reliable training data on a topic, it may produce content that appears accurate but is fabricated.
  • Leading or biased prompts: inputs that suggest a specific answer can nudge the model into confirming something untrue to match expectations.
  • Interrupted context due to connection issues: especially with browser-based tools, a brief loss of session data can cause the model to generate off-track or contradictory outputs.
  • Over-optimisation for confidence: most systems are trained to sound fluent and assertive, so saying ‘I don’t know’ is statistically rare unless explicitly prompted.

Each of these cases stems from a single truth. Language models are not fact-checkers. They are word predictors. And prediction, without verification, invites fabrication.

The cost of trust in flawed systems

Hallucinations become more dangerous not when they happen, but when they are believed.

Users may not question the output of an AI system if it appears polished, grammatically sound, and well-structured. This perceived credibility can lead to real consequences, including legal documents based on invented cases, medical advice referencing non-existent studies, or voters misled by political misinformation.

In low-stakes scenarios, hallucinations may lead to minor confusion. In high-stakes contexts, the same dynamic can result in public harm or institutional breakdown. Once generated, an AI hallucination can be amplified across platforms, indexed by search engines, and cited in real documents. At that point, it becomes a synthetic fact.

Can hallucinations be fixed?

Some efforts are underway to reduce hallucination rates. Retrieval-augmented generation (RAG), fine-tuning on verified datasets, and human-in-the-loop moderation can improve reliability. Still, no method has eliminated hallucinations.
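
As a rough sketch of the retrieval-augmented idea (a toy keyword retriever over invented documents, not any particular vendor’s API), the model is restricted to retrieved sources and explicitly given permission to say ‘I don’t know’ when they fall short:

```python
# Minimal sketch of retrieval-augmented generation (RAG). The documents
# and the keyword retriever are toy stand-ins; a real system would use a
# vector database and send the final prompt to an actual LLM.

DOCUMENTS = [
    "Retrieval-augmented generation grounds answers in external sources.",
    "Differential privacy adds calibrated noise to protect individuals.",
    "Human-in-the-loop review catches errors before publication.",
]

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    return sorted(docs,
                  key=lambda d: len(q_words & set(d.lower().split())),
                  reverse=True)[:top_k]

def build_grounded_prompt(query: str) -> str:
    """Assemble a prompt that restricts the model to retrieved sources."""
    sources = "\n".join(f"- {doc}" for doc in retrieve(query, DOCUMENTS))
    return ("Answer using ONLY the sources below. If they do not contain "
            "the answer, reply 'I don't know'.\n"
            f"Sources:\n{sources}\n\nQuestion: {query}")

if __name__ == "__main__":
    print(build_grounded_prompt("What does retrieval-augmented generation do?"))
    # Grounding the model in sources, plus explicit permission to admit
    # uncertainty, is what reduces (but does not eliminate) hallucinations.
```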

The deeper issue is how language models are rewarded, trained, and deployed. Without institutional norms prioritising verifiability and technical mechanisms that can flag uncertainty, hallucinations will remain embedded in the system.

Even the most capable AI models need a measure of humility. The ability to say ‘I don’t know’ is still one of the rarest responses in the current landscape.

How AI hallucinations mislead users and shape decisions
Image via AI / ChatGPT

Hallucinations won’t go away. Responsibility must step in.

Language models are not truth machines. They are prediction engines trained on vast and often messy human data. Their brilliance lies in fluency, but fluency can easily mask fabrication.

As AI tools become part of our legal, political, and civic infrastructure, institutions and users must approach them critically. Trust in AI should never be passive. And without active human oversight, hallucinations may not just mislead; they may define the outcome.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia outlines guidelines for social media age ban

Australia has released its regulatory guidance for the incoming social media age restriction law, which takes effect on December 10. Users under 16 will be barred from holding accounts on most major platforms, including Instagram, TikTok, and Facebook.

The new guidance details what are considered ‘reasonable steps’ for compliance. Platforms must detect and remove underage accounts, communicating clearly with affected users. It remains uncertain whether removed accounts will have their content deleted or if they can be reactivated once the user turns 16.

Platforms are also expected to block attempts to re-register, including the use of VPNs or other workarounds. Companies are encouraged to implement a multi-step age verification process and provide users with a range of options, rather than relying solely on government-issued identification.

Blanket age verification won’t be required, nor will platforms need to store personal data from verification processes. Instead, companies must demonstrate effectiveness through system-level records. Existing data, such as an account’s creation date, may be used to estimate age.
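
As a purely hypothetical illustration of that last point (the field names and signup details are invented, not drawn from the guidance), an account’s creation date combined with the age declared at signup already yields a lower bound on the holder’s current age, with no new identity check required:

```python
# Hypothetical sketch: estimating a minimum current age from data a
# platform already holds (creation date plus age declared at signup).
# Field names and values are invented for illustration only.

from datetime import date

def minimum_current_age(created_on: date, declared_age_at_signup: int,
                        today: date | None = None) -> int:
    """Lower bound on the user's age today, assuming the declared age was truthful."""
    today = today or date.today()
    full_years_since_signup = (today - created_on).days // 366  # conservative floor
    return declared_age_at_signup + full_years_since_signup

account = {"created_on": date(2018, 3, 5), "declared_age_at_signup": 13}
print(minimum_current_age(account["created_on"],
                          account["declared_age_at_signup"],
                          today=date(2025, 12, 10)))
# An account created in 2018 by a self-declared 13-year-old implies the
# holder is at least 20 when the law takes effect on 10 December 2025.
```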

Under-16s will still be able to view content without logging in, for example, watching YouTube videos in a browser. However, shared access to adult accounts on family devices could present enforcement challenges.

Communications Minister Anika Wells stated that there is ‘no excuse for non-compliance.’ Each platform must now develop its own strategy to meet the law’s requirements ahead of the fast-approaching deadline.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI will kill middle-ground media, but raw content will thrive

Advertising is heading for a split future. By 2030, brands will run hyper-personalised AI campaigns or embrace raw human storytelling. Everything in between will vanish.

AI-driven advertising will go far beyond text-to-image gimmicks. These adaptive systems will combine social trends, search habits, and first-party data to create millions of real-time ad variations.

The opposite approach will lean into imperfection: unpolished TikToks, founder-shot iPhone videos, and content that feels authentic and alive. Audiences reward authenticity over carefully scripted, generic campaigns.

Mid-tier creative work, polished but forgettable, will be the first to fade away. AI can replicate it instantly, and audiences will scroll past it without noticing.

Marketers must now pick a side: feed AI with data and scale personalisation, or double down on community-driven, imperfect storytelling. The middle won’t survive.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China proposes independent oversight committees to strengthen data protection

The Cyberspace Administration of China (CAC) has proposed new rules requiring major online platforms to establish independent oversight committees focused on personal data protection. The draft regulation, released Friday, 13 September 2025, is open for public comment until 12 October 2025.

Under the proposal, platforms with large user bases and complex operations must form committees of at least seven members, two-thirds of whom must be external experts without ties to the company. These experts must have at least three years of experience in data security and be well-versed in relevant laws and standards.

The committees will oversee sensitive data handling, cross-border transfers, security incidents, and regulatory compliance. They are also tasked with maintaining open communication channels with users about data concerns.

If a platform fails to act on the committee’s concerns or offers unsatisfactory reasons, the issue can be escalated to provincial regulators in China.

The CAC says the move aims to enhance transparency and accountability by involving independent experts in monitoring and flagging high-risk data practices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

3D figurine craze takes off with Google Gemini update

The latest update to Google’s Gemini has sparked a social media craze by allowing users to transform 2D photos into lifelike 3D figurines. The feature, part of Gemini 2.5 Flash Image, has quickly become the standout trend from the update.

More than just a photo-editing tool, Gemini now helps users turn selfies, portraits, and pet photos into stylised statuettes. The images resemble collectable vinyl or resin figures, with smooth finishes and polished detailing.

The digital figurine trend blends personalisation with creativity, allowing users to reimagine themselves or loved ones as miniature display pieces. The playful results have been widely shared across platforms, driving renewed engagement with Google’s AI suite.

The figurine generator also complements Gemini’s other creative functions, such as image combination and style transformation, which allow users to experiment with entirely new aesthetics. Together, these tools extend Gemini’s appeal beyond simple photo correction.

While other platforms have offered 3D effects, Gemini’s version produces highly polished results in seconds, democratising what was once a niche 3D modelling skill. For many, it is the most accessible way to turn memories into digital art.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Japan-backed AI avatar to highlight climate risks at Osaka Expo

An AI avatar named Una will be presented at the UN pavilion during the 2025 World Expo in Osaka later this month as part of efforts to promote climate action.

The anime-inspired character, developed with support from the Japanese government, will use 3D hologram technology to engage visitors from 29 September to 4 October.

Una was launched online in May and can respond automatically in multiple languages, including English and Japanese. She was created under the Pacific Green Transformation Project, which supports renewable energy initiatives such as electric vehicles in Samoa and hydropower in Vanuatu.

Her role is to share stories of Pacific island nations facing the impacts of rising sea levels and raise awareness about climate change.

Kanni Wignaraja, UN assistant secretary-general and regional director for Asia and the Pacific, described Una as a strong voice for threatened communities. Influenced by Japanese manga and anime, she is designed to act like a cultural ambassador who connects Pacific struggles with Japanese audiences.

Pacific sea levels have risen by more than 15 centimetres in some regions over the past three decades, leading to flooding, crop damage and migration fears. The risks are existential for nations like Tuvalu, with an average elevation of just two metres.

The UN hopes Una will encourage the public to support renewable energy adoption and climate resilience in vulnerable regions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI search tools challenge Google’s dominance

AI tools are increasingly reshaping how people search online, with large language models like ChatGPT drawing millions away from traditional engines.

Montreal-based lawyer and consultant Anja-Sara Lahady says she now turns to ChatGPT instead of Google for everyday tasks such as meal ideas, interior decoration tips and drafting low-risk emails. She describes it as a second assistant rather than a replacement for legal reasoning.

ChatGPT’s weekly user base has surged to around 800 million, double the figure reported earlier in 2025. Data shows that nearly 6% of desktop searches are already directed to language models, compared with barely half that rate a year ago.

Academics such as Professor Feng Li argue that users favour AI tools because they reduce cognitive effort by providing clear summaries instead of multiple links. However, he warns that verification remains essential due to factual errors.

Google insists its search activity continues to expand, supported by AI Overviews and AI Mode, which offer more conversational and tailored answers.

Yet, testimony in a US antitrust case revealed that Google searches on Apple devices via Safari declined for the first time in two decades, underlining the competitive pressure from AI.

The rise of language models is also forcing a shift in digital marketing. Agencies report that LLMs highlight trusted websites, press releases and established media rather than social media content.

This change may influence consumer habits, with evidence suggesting that referrals from AI systems often lead to higher-quality sales conversions. For many users, AI now represents a faster and more personal route to decisions on products, travel or professional tasks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI challenges how students prepare for exams

Australia’s Year 12 students are the first to complete their final school years with widespread access to AI tools such as ChatGPT.

Educators warn that while the technology can support study, it risks undermining the core skills of independent thinking and writing. In English, the only compulsory subject, critical thinking is now viewed as more essential than ever.

Trials in New South Wales and South Australia use AI programs designed to guide rather than provide answers, but teachers remain concerned about how to verify work and ensure students value their own voices.

Experts argue that exams, such as the VCE English paper in October, highlight the reality that AI cannot sit assessments. Students must still practise planning, drafting and reflecting on ideas, skills which remain central to academic success.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Privacy-preserving AI gets a boost with Google’s VaultGemma model

Google has unveiled VaultGemma, a new large language model built to offer cutting-edge privacy through differential privacy. The 1-billion-parameter model is based on Google’s Gemma architecture and is described as the most powerful differentially private LLM to date.

Differential privacy adds mathematical noise to data, preventing the identification of individuals while still producing accurate overall results. The method has long been used in regulated industries, but has been challenging to apply to large language models without compromising performance.
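
A small toy example of the general idea (the classic Laplace mechanism on an invented count query, not a description of how VaultGemma itself is trained, where the noise is applied during model training) shows how calibrated noise hides any single person’s contribution while keeping the aggregate roughly accurate:

```python
# Toy illustration of differential privacy: the Laplace mechanism applied
# to a simple count query. Textbook construction with invented data; not
# VaultGemma's actual training procedure.

import numpy as np

def dp_count(values: list[bool], epsilon: float) -> float:
    """Differentially private count of True values.

    A count changes by at most 1 when one person is added or removed
    (sensitivity = 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy.
    """
    return sum(values) + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

survey = [True] * 842 + [False] * 158   # invented responses from 1,000 people
print("true count:   ", sum(survey))
print("private count:", round(dp_count(survey, epsilon=1.0), 1))
# A smaller epsilon means more noise and stronger privacy, at some cost
# in accuracy for the published figure.
```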

VaultGemma is designed to eliminate that trade-off. Google states that the model can be trained and deployed with differential privacy enabled, while maintaining comparable stability and efficiency to non-private LLMs.

This breakthrough could have significant implications for developers building privacy-sensitive AI systems, ranging from healthcare and finance to government services. It demonstrates that sensitive data can be protected without sacrificing speed or accuracy.

Google’s research teams say the model will be released with open-source tools to help others adopt privacy-preserving techniques. The move comes amid rising regulatory and public scrutiny over how AI systems handle personal data.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!