Qwen3-Omni tops Hugging Face as China’s open AI challenge grows

Alibaba’s Qwen3-Omni multimodal AI system has quickly risen to the top of Hugging Face’s trending model list, challenging closed systems from OpenAI and Google. The series unifies text, image, audio, and video processing in a single model, signalling the rapid growth of Chinese open-source AI.

Qwen3-Omni-30B-A3B currently leads Hugging Face’s list, followed by the image-editing model Qwen-Image-Edit-2509. Alibaba’s cloud division describes Qwen3-Omni as the first fully integrated multimodal AI framework built for real-world applications.

Self-reported benchmarks suggest Qwen3-Omni outperforms Qwen2.5-Omni-7B, OpenAI’s GPT-4o, and Google’s Gemini-2.5-Flash in audio recognition, comprehension, and video understanding tasks.

Open-source dominance is growing, with Alibaba’s models taking half the top 10 spots on Hugging Face rankings. Tencent, DeepSeek, and OpenBMB filled most of the remaining positions, leaving IBM as the only Western representative.

The ATOM Project warned that US leadership in AI could erode as open models from China gain adoption. It argued that China’s approach draws businesses and researchers away from American systems, which have become increasingly closed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Gemini’s image model powers Google’s new Mixboard platform

Google has launched Mixboard, an experimental AI tool designed to help users explore, refine, and expand ideas both textually and visually. The platform is powered by the Gemini 2.5 Flash Image model and is now available as a free beta for users in the United States.

Mixboard provides an open canvas where users can begin with pre-built templates or custom prompts to create project boards. It can be used for tasks such as planning events, home decoration, or organising inspirational images, presenting an overall mood for a project.

Users can upload their own images or generate new ones by describing what they want to see. The tool supports iterative editing, allowing minor tweaks or combining visuals into new compositions through Google’s Nano Banana image model.

Quick actions, such as regenerating an image, let users explore variations with a single click. The tool can also generate text based on the context of images placed on the board, helping tie visuals to written ideas.

Google says Mixboard is part of its push to make Gemini more useful for creative work. Since the launch of Nano Banana in August, the Gemini app has overtaken ChatGPT to rank first in the US App Store.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn default AI data sharing faces Dutch privacy watchdog scrutiny

The Dutch privacy watchdog, Autoriteit Persoonsgegevens (AP), is warning LinkedIn users in the Netherlands to review their settings to prevent their data from being used for AI training.

LinkedIn plans to use names, job titles, education history, locations, skills, photos, and public posts from European users to train its systems. Private messages will not be included; however, the sharing option is enabled by default.

AP Deputy Chair Monique Verdier said the move poses significant risks. She warned that once personal data is used to train a model, it cannot be removed, and its future uses are unpredictable.

LinkedIn’s European headquarters are in Dublin, placing it under the jurisdiction of Ireland’s Data Protection Commission, which will determine whether the plan can proceed. The AP said it is working with Irish and EU counterparts and has already received complaints.

Users must opt out by 3 November if they do not wish to have their data used. They can disable the setting via the AP’s link or manually in LinkedIn under ‘settings & privacy’ → ‘data privacy’ → ‘data for improving generative AI’.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Secrets sprawl flagged as top software supply chain risk in Australia

Avocado Consulting urges Australian organisations to boost software supply chain security after a high-alert warning from the Australian Cyber Security Centre (ACSC). The alert flagged threats, including social engineering, stolen tokens, and manipulated software packages.

Dennis Baltazar of Avocado Consulting said attackers combine social engineering with living-off-the-land techniques, making attacks appear routine. He warned that secrets scattered across systems can turn small slips into major breaches.

Baltazar advised immediate audits to find unmanaged privileged accounts and non-human identities. He urged embedding security into everyday workflows through short-lived credentials, policy-as-code, and default secret detection, arguing that this reduces incidents while speeding up development.
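
As a rough illustration of what ‘default secret detection’ can look like in practice, the sketch below scans files for strings that resemble credentials before they are committed. The patterns, file handling, and hook wiring are assumptions for illustration, not a description of any tool Avocado Consulting uses.

```python
# Illustrative pre-commit style secret scan: flags likely credentials in the
# files passed on the command line. Patterns are deliberately simple examples.
import re
import sys
from pathlib import Path

# A few common-looking secret shapes (illustrative, far from exhaustive).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                                # AWS access key ID shape
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),    # private key headers
    re.compile(r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
]

def scan_file(path: Path) -> list[str]:
    """Return human-readable findings for any line matching a secret pattern."""
    findings = []
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return findings
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append(f"{path}:{lineno}: possible secret")
    return findings

if __name__ == "__main__":
    # Usage: python scan_secrets.py file1 file2 ...  (e.g. the staged files in a hook)
    all_findings = [f for arg in sys.argv[1:] for f in scan_file(Path(arg))]
    for finding in all_findings:
        print(finding)
    # Non-zero exit blocks the commit or pipeline step when something is flagged.
    sys.exit(1 if all_findings else 0)
```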

Avocado Consulting advises organisations to eliminate secrets from code and pipelines, rotate tokens frequently, and validate every software dependency by default using version pinning, integrity checks, and provenance verification. Monitoring CI/CD activity for anomalies can also help detect attacks early.
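
A minimal sketch of the version pinning and integrity checks described above might look like the following; the manifest format, package name, and hash value are hypothetical placeholders rather than any real project’s configuration.

```python
# Illustrative build-step check: only accept a downloaded dependency artifact
# if its version and SHA-256 hash match a pinned manifest.
import hashlib
import sys

# Hypothetical pinned manifest: package -> (version, expected SHA-256 of the artifact).
PINNED = {
    "example-lib": ("1.4.2", "9b74c9897bac770ffc029102a200c5de"
                             "f896d8f1a1b7a6f1e4f6a1c2d3e4f5a6"),
}

def verify_artifact(package: str, version: str, artifact_path: str) -> bool:
    """Return True only if the artifact matches the pinned version and hash."""
    pinned_version, pinned_sha256 = PINNED.get(package, (None, None))
    if pinned_version is None:
        print(f"{package}: not in the pinned manifest, refusing to install")
        return False
    if version != pinned_version:
        print(f"{package}: version {version} differs from pinned {pinned_version}")
        return False
    with open(artifact_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != pinned_sha256:
        print(f"{package}: hash mismatch, possible tampering")
        return False
    return True

if __name__ == "__main__":
    # Usage: python verify.py <package> <version> <path-to-downloaded-artifact>
    ok = verify_artifact(sys.argv[1], sys.argv[2], sys.argv[3])
    sys.exit(0 if ok else 1)
```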

Failing to act could expose cryptographic keys, facilitate privilege escalation, and result in reputational and operational damage. Avocado Consulting states that secure development practices must become the default, with automated scanning and push protection integrated into the software development lifecycle.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Spanish joins Google’s global AI Mode expansion

Google is rapidly expanding AI Mode, its generative AI-powered search assistant. The company has announced that the feature is now rolling out globally in Spanish. Spanish speakers can now interact with AI Mode to ask complex questions that traditional Search handles poorly.

AI Mode has seen swift adoption since its launch earlier this year. First introduced in March, the feature was rolled out to users across the US in May, followed by its first language expansion earlier this month.

Hindi, Indonesian, Japanese, Korean, and Brazilian Portuguese were the first languages added, and Spanish now joins the list. Google says more languages will follow soon as part of its global AI Mode rollout.

Google says the feature is designed to work alongside Search, not replace it, offering conversational answers with links to supporting sources. The company has stressed that responses are generated with safety filters and fact-checking layers.

The rollout reflects Google’s broader strategy to integrate generative AI into its ecosystem, spanning Search, Workspace, and Android. AI Mode will evolve with multimodal support and tighter integration with other Google services.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI image war heats up as ByteDance unveils Seedream 4.0

ByteDance has unveiled Seedream 4.0, its latest AI-powered image generation model, which it claims outperforms Google DeepMind’s Gemini 2.5 Flash Image. The launch signals ByteDance’s bid to rival leading creative AI tools.

Developed by ByteDance’s Seed division, the model combines advanced text-to-image generation with fast, precise image editing. Internal testing reportedly showed superior prompt accuracy, image alignment, and visual quality compared with DeepMind’s US-developed system.

Artificial Analysis, an independent AI benchmarking firm, called Seedream 4.0 a significant step forward. The model integrates Seedream 3.0’s generation capability with SeedEdit 3.0’s editing tools while maintaining a price of US$30 per 1,000 generations.

ByteDance claims that Seedream 4.0 runs over 10 times faster than earlier versions, enhancing the user experience with near-instant image inference. Early users have praised its ability to make quick, text-prompted edits with high accuracy.

The tool is now available to consumers in China through the Jimeng and Doubao AI apps, and to businesses via Volcano Engine, ByteDance’s cloud platform. A formal technical report supporting the company’s claims has not yet been released.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

JFTC study and MSCA shape Japan’s AI oversight strategy

Japan is adopting a softer approach to regulating generative AI, emphasising innovation while managing risks. Its 2025 AI Bill promotes development and safety, supported by international norms and guidelines.

The Japan Fair Trade Commission (JFTC) is running a market study on competition concerns in AI, alongside enforcing the new Mobile Software Competition Act (MSCA), aimed at curbing anti-competitive practices in mobile software.

The AI Bill focuses on transparency, international cooperation, and sector-specific guidance rather than heavy penalties. Policymakers hope this flexible framework will avoid stifling innovation while encouraging responsible adoption.

The MSCA, set to be fully enforced in December 2025, obliges mobile platform operators to ensure interoperability and fair treatment of developers, including potential applications to AI tools and assistants.

With rapid AI advances, regulators in Japan remain cautious but proactive. The JFTC aims to monitor markets closely, issue guidelines as needed, and preserve a balance between competition, innovation, and consumer protection.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Character.AI and Google face suits over child safety claims

Three lawsuits have been filed in US federal courts alleging that Character.AI and its founders, with Google’s backing, deployed predatory chatbots that harmed children. The cases involve the family of 13-year-old Juliana Peralta, who died by suicide in 2023, and two other minors.

The complaints say the chatbots were designed to mimic humans, build dependency, and expose children to sexual content. Using emojis, typos, and pop-culture personas, the bots allegedly gained trust and encouraged isolation from family and friends.

Juliana’s parents say she engaged in explicit chats, disclosed suicidal thoughts, and received no intervention before her death. Nina, 15, from New York, attempted suicide after her mother blocked the app, while a girl from Colorado known as T.S. was also affected.

Character.AI and Google are accused of misrepresenting the app as child-safe and failing to act on warning signs. The cases follow earlier lawsuits from the Social Media Victims Law Center over similar claims that the platform encouraged harm.

SMVLC founder Matthew Bergman stated that the cases underscore the urgent need for accountability in AI design and stronger safeguards to protect children. The legal team is seeking damages and stricter safety standards for chatbot platforms marketed to minors.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta and Google to block political ads in EU under new regulations

Broadcasters and advertisers seek clarity before the EU’s political advertising rules become fully applicable on 10 October. The European Commission has promised further guidance, but details on what qualifies as political advertising remain vague.

Meta and Google will block political, election, and social issue ads in the EU when the rules take effect, citing operational challenges and legal uncertainty. The regulation, aimed at curbing disinformation and foreign interference, requires ads to carry labels disclosing their sponsors, payments, and targeting.

Publishers fear they lack the technical means to comply or block non-compliant programmatic ads, risking legal exposure. They call for clear sponsor identification procedures, standardised declaration formats, and robust verification processes to ensure authenticity.

Advertisers warn that the rules’ broad definition of political actors may be hard to implement. At the same time, broadcasters fear issue-based campaigns – such as environmental awareness drives – could unintentionally fall under the scope of political advertising.

The Dutch parliamentary election on 29 October will be the first to take place under the fully applicable rules, making clarity from Brussels urgent for media and advertisers across the bloc.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Amazon and Mercado Libre criticised for limiting seller mobility in Mexico

Mexico’s competition watchdog has accused Amazon and Mercado Libre of erecting barriers that limit the mobility of sellers in the country’s e-commerce market. The two platforms reportedly account for 85% of the seller market.

The Federal Economic Competition Commission (COFECE) said the companies give preferential treatment to sellers who use their logistics services, and that they fail to disclose how featured offers are selected, thereby restricting fair competition.

Despite finding evidence of these practices, COFECE stopped short of imposing corrective measures, citing a lack of consensus among stakeholders. Amazon welcomed the decision, saying it demonstrates the competitiveness of the retail market in Mexico.

The watchdog aims to promote a more dynamic e-commerce sector, benefiting buyers and sellers. Its February report had recommended measures to improve transparency, separate loyalty programme services, and allow fairer access to third-party delivery options.

Trade associations praised COFECE for avoiding sanctions, warning that penalties could harm consumers and shield traditional retailers. Mercado Libre has not yet commented on the findings.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!