Amazon and Mercado Libre criticised for limiting seller mobility in Mexico

Mexico’s competition watchdog has accused Amazon and Mercado Libre of erecting barriers that limit the mobility of sellers in the country’s e-commerce market. The two platforms reportedly account for 85% of the seller market.

The Federal Economic Competition Commission (COFECE) stated that the companies provide preferential treatment to sellers who utilise their logistics services and fail to disclose how featured offers are selected, thereby restricting fair competition.

Despite finding evidence of these practices, COFECE stopped short of imposing corrective measures, citing a lack of consensus among stakeholders. Amazon welcomed the decision, saying it demonstrates the competitiveness of the retail market in Mexico.

The watchdog aims to promote a more dynamic e-commerce sector, benefiting buyers and sellers. Its February report had recommended measures to improve transparency, separate loyalty programme services, and allow fairer access to third-party delivery options.

Trade associations praised COFECE for avoiding sanctions, warning that penalties could harm consumers and shield traditional retailers. Mercado Libre has not yet commented on the findings.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU AI Act enforcement gears up with 15 authorities named in Ireland

Ireland has designated 15 authorities to monitor compliance with the EU’s AI Act, making it one of the first EU countries fully ready to enforce the new rules. The AI Act regulates AI systems according to their risk to society and began phasing in last year.

Governments had until 2 August to notify the European Commission of their appointed market surveillance authorities. In Ireland, these include the Central Bank, Coimisiún na Meán, the Data Protection Commission, the Competition and Consumer Protection Commission, and the Health and Safety Authority.

The country will also establish a National AI Office to act as the central coordinator for AI Act enforcement and to liaise with EU institutions. Where multiple authorities are involved, a single point of contact must be designated to ensure clear communication.

Ireland joins Cyprus, Latvia, Lithuania, Luxembourg, Slovenia, and Spain as countries that have appointed their contact points. The Commission has not yet published the complete list of authorities notified by member states.

Former Italian Prime Minister Mario Draghi has called for a pause in the rollout of the AI Act, citing risks and a lack of technical standards. The Commission has launched a consultation as part of its digital simplification package, which will be implemented in December.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Japan investigates X for non-compliance with the harmful content law

Japanese regulators are reviewing whether the social media platform X fails to comply with new content removal rules.

The law, which took effect in April, requires designated platforms to allow victims of harmful online posts to request deletion without facing unnecessary obstacles.

X currently requires non-users to register an account before they can file such requests. Officials say this requirement could place an excessive burden on victims and may violate the law.

The company has also been criticised for not providing clear public guidance on submitting removal requests, prompting questions over its commitment to combating online harassment and defamation.

Other platforms, including YouTube and messaging service Line, have already introduced mechanisms that meet the requirements.

The Ministry of Internal Affairs and Communications has urged all operators to treat non-users like registered users when responding to deletion demands. Still, X and the bulletin board site bakusai.com have yet to comply.

The ministry says it will continue to assess whether X’s practices breach the law. Experts on a government panel have called for more public information on the process, arguing that greater awareness could help deter online abuse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

West London borough approves AI facial recognition CCTV rollout

Hammersmith and Fulham Council has approved a £3m upgrade to its CCTV system that will integrate facial recognition and AI across the west London borough.

The council, which operates over 2,000 cameras, intends to install live facial recognition technology at crime hotspots and link it with police databases for real-time identification.

Alongside the new cameras, 500 units will be equipped with AI tools to speed up video analysis, track vehicles, and provide retrospective searches. The plans also include the possible use of drones, pending approval from the Civil Aviation Authority.

Council leader Stephen Cowan said the technology will provide more substantial evidence in a criminal justice system he described as broken, arguing it will help secure convictions instead of leaving cases unresolved.

Civil liberties group Big Brother Watch condemned the project as mass surveillance without safeguards, warning of constant identity checks and retrospective monitoring of residents’ movements.

Some locals also voiced concern, saying the cameras address crime after it happens instead of preventing it. Others welcomed the move, believing it would deter offenders and reassure those who feel unsafe on the streets.

The Metropolitan Police currently operates one pilot site in Croydon, with findings expected later in the year, and the council says its rollout depends on continued police cooperation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hollywood studios take legal action against MiniMax for AI copyright infringement

Disney, Warner Bros. Discovery and NBCUniversal have filed a lawsuit in California against Chinese AI company MiniMax, accusing it of large-scale copyright infringement.

The studios allege that MiniMax’s Hailuo AI service generates unauthorised images and videos featuring well-known characters such as Darth Vader, marketing itself as a ‘Hollywood studio in your pocket’ while ignoring copyright law.

According to the complaint, MiniMax, reportedly worth $4 billion, ignored cease-and-desist requests and continues to profit from copyrighted works. The studios argue that the company could easily implement safeguards, pointing to existing controls that already block violent or explicit content.

The studios claim that MiniMax’s approach poses a serious threat to both creators and the broader film industry, which contributes hundreds of billions of dollars to the US economy.

Plaintiffs, including Disney’s Marvel and Lucasfilm units, Universal’s DreamWorks Animation and Warner Bros.’ DC Comics, are seeking statutory damages of up to $150,000 per infringed work or unspecified compensation.

They are also seeking an injunction to stop MiniMax from continuing the alleged violations, rather than relying on damages alone.

The Motion Picture Association has backed the lawsuit, with its chairman Charles Rivkin warning that unchecked copyright infringement could undermine millions of jobs and the cultural value created by the American film industry.

MiniMax, based in Shanghai, has not responded publicly to the claims but has previously described itself as a global AI foundation model company with over 157 million users worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ghana launches national privacy campaign

Ghana has launched the National Privacy Awareness Campaign, a year-long initiative to strengthen citizens’ privacy rights and build public trust in the country’s expanding digital ecosystem.

Unveiled by Deputy Minister Mohammed Adams Sukparu, the campaign emphasises that data protection is not just a legal requirement but essential to innovation, digital participation, and Ghana’s goal of becoming Africa’s AI hub.

The campaign will run from September 2025 to September 2026 across all 16 regions, using English and key local languages to promote widespread awareness.

The initiative includes the inauguration of the Ghana Association of Privacy Professionals (GAPP) and recognition of new Certified Data Protection Officers, many trained through the One Million Coders Programme.

Officials stressed that effective data governance requires government, private sector, civil society, and media collaboration. The Data Protection Commission reaffirmed its role in protecting privacy while noting ongoing challenges such as limited awareness and skills gaps.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI moves to for-profit with Microsoft deal

Microsoft and OpenAI have agreed to new non-binding terms that will allow OpenAI to restructure into a for-profit company, marking a significant shift in their long-standing partnership.

The agreement sets the stage for OpenAI to raise capital, pursue additional cloud partnerships, and eventually go public, while Microsoft retains access to its technology.

The previous deal gave Microsoft exclusive rights to sell OpenAI tools via Azure and made it the primary provider of compute power. OpenAI has since expanded its options, including a $300 billion cloud deal with Oracle and an agreement with Google, and is developing its own data centre project, Stargate.

OpenAI aims to maintain its nonprofit arm, which would hold a stake worth more than $100 billion based on the projected $500 billion private market valuation.

Regulatory approval from the attorneys general of California and Delaware is required for the new structure, with OpenAI targeting completion by the end of the year to secure key funding.

Both companies continue to compete across AI products, from consumer chatbots to business tools, while Microsoft works on building its own AI models to reduce reliance on OpenAI technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK launches CAF 4.0 for cybersecurity

The UK’s National Cyber Security Centre has released version 4.0 of its Cyber Assessment Framework to help organisations protect essential services from rising cyber threats.

The updated CAF provides a structured approach for assessing and improving cybersecurity and resilience across critical sectors.

Version 4.0 introduces a deeper focus on attacker methods and motivations to inform risk decisions, ensures software in essential services is developed and maintained securely, and strengthens guidance on threat detection through security monitoring and threat hunting.

AI-related cyber risks are also now covered more thoroughly throughout the framework.

The CAF primarily supports energy, healthcare, transport, digital infrastructure, and government organisations, helping them meet regulatory obligations such as the NIS Regulations.

Developed in consultation with UK cyber regulators, the framework provides clear benchmarks for assessing security outcomes relative to threat levels.

Authorities encourage system owners to adopt CAF 4.0 alongside complementary tools such as Cyber Essentials, the Cyber Resilience Audit, and Cyber Adversary Simulation services. These combined measures enhance confidence and resilience across the nation’s critical infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FTC opens inquiry into AI chatbots and child safety

The US Federal Trade Commission has launched an inquiry into AI chatbots that act as digital companions, raising concerns about their impact on children and teenagers.

Seven firms, including Alphabet, Meta, OpenAI and Snap, have been asked to provide information about how they address risks linked to AI chatbots designed to mimic human relationships.

Chairman Andrew Ferguson said protecting children online was a top priority, stressing the need to balance safety with maintaining US leadership in AI. Regulators fear minors may be particularly vulnerable to forming emotional bonds with AI chatbots that simulate friendship and empathy.

The inquiry will examine how companies develop AI chatbot personalities, monetise user interactions and enforce age restrictions. It will also assess how personal information from conversations is handled and whether privacy laws are being respected.

Other companies receiving orders include Character.AI and Elon Musk’s xAI.

The probe follows growing public concern over the psychological effects of generative AI on young people.

Last month, the parents of a 16-year-old who died by suicide sued OpenAI, alleging ChatGPT provided harmful instructions. The company later pledged corrective measures, admitting its chatbot does not always recommend mental health support during prolonged conversations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated film sparks copyright battle as it heads to Cannes

OpenAI has taken a significant step into entertainment by backing Critterz, the first animated feature film generated with GPT models.

Human artists sketch characters and scenes, while AI transforms them into moving images. The $30 million project, expected to finish in nine months, is far cheaper and faster than traditional animation and could debut at the Cannes Film Festival in 2026.

Yet the film has triggered a fierce copyright debate in India and beyond. Under India’s Copyright Act of 1957, only human works are protected.

Legal experts argue that while AI can be used as a tool when human skill and judgement are clearly applied, autonomously generated outputs may not qualify for copyright at all.

The uncertainty carries significant risks. Producers may struggle to combat piracy or unauthorised remakes, while streaming platforms and investors could hesitate to support projects without clear ownership rights.

A recent case in which an AI tool was credited as the co-author of a painting, a credit that was later revoked, shows how untested the law remains.

Global approaches vary. The US and the EU require human creativity for copyright, while the UK recognises computer-generated works under certain conditions.

In India, lawyers suggest contracts provide the safest path until the law evolves, with detailed agreements on ownership, revenue sharing and disclosure of AI input.

The government has already set up an expert panel to review the Copyright Act, even as AI-driven projects and trailers rapidly gain popularity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!