EU demands transparency from Temu and Shein

The European Union has directed Chinese fast-fashion e-commerce giants Temu and Shein to disclose their compliance with EU online content regulations by July 12. The move follows complaints lodged by consumer groups and comes after both platforms were designated Very Large Online Platforms under the Digital Services Act, a status that imposes stricter obligations on handling illegal and harmful content.

According to the European Commission, requests for information have been issued to Temu and Shein regarding their measures to combat illegal products, prevent user deception through manipulative interfaces, and safeguard minors. The Commission also seeks transparency in their recommendation systems, traceability of sellers, and compliance integration into platform design.

The enforcement action stems from consumer organisations’ complaints and underscores the EU’s commitment to ensuring digital platforms uphold regulatory standards. Failure to comply with the Digital Services Act could lead to fines of up to 6% of a company’s global turnover, emphasising the seriousness with which the EU views adherence to online content rules.

Temu and Shein must furnish comprehensive responses by the specified deadline, marking a pivotal moment in how global e-commerce giants navigate regulatory landscapes beyond their home markets. The outcome of these disclosures will be closely watched as the EU continues to assert its regulatory authority over digital platforms operating within its jurisdiction.

Experts join Regulating AI’s new advisory board

Regulating AI, a non-profit organisation dedicated to promoting AI governance, has announced the formation of its advisory board. Members include notable figures such as former US Senator Cory Gardner, former Bolivian President Jorge Quiroga, and former Finnish Prime Minister Esko Aho. The board aims to foster a sustainable AI ecosystem that benefits humanity while addressing potential risks and ethical concerns.

The founder of Regulating AI, Sanjay Puri, expressed his excitement about the diverse expertise and perspectives the new board members bring. He emphasised the importance of their wisdom in navigating the complexities of the rapidly evolving AI landscape and shaping policies that balance innovation with ethical considerations and societal well-being.

One of the organisation’s key initiatives is developing a comprehensive AI governance framework. That includes promoting international cooperation, advocating for diverse voices, and exploring sector-specific AI implications. Former President of Bolivia Jorge Quiroga highlighted the transformational power of AI and the need for effective regulation that considers the unique challenges of developing nations.

Regulating AI aims to build public trust, align international standards, and empower various stakeholders through its board. Former US Senator Gardner underscored the necessity of robust regulatory frameworks to ensure AI is developed and deployed responsibly, protecting consumer privacy, preventing algorithmic bias, and upholding democratic values. The organisation also seeks to educate and raise awareness about AI regulations, fostering discussions among experts and policymakers to advance understanding and implementation.

Italian watchdog tests AI for market oversight

Italy’s financial watchdog, Consob, has begun experimenting with AI to enhance its oversight capabilities, particularly in the initial review of listing prospectuses and the detection of insider trading. According to Consob, these AI algorithms aim to swiftly identify potential instances of insider trading, which traditionally requires significantly more time when conducted manually.

The agency reported that its AI algorithms can detect errors in just three seconds, a task typically taking a human analyst at least 20 minutes. These efforts were part of testing conducted last year using prototypes developed in collaboration with the Scuola Normale Superiore in Pisa, alongside an additional model developed independently.

Consob views the integration of AI as pivotal in enhancing the effectiveness of regulatory controls to detect financial misconduct. The next phase involves transitioning from prototype testing to fully incorporating AI into Consob’s regular operational procedures. That initiative mirrors similar efforts by financial regulators globally who are increasingly leveraging AI to bolster consumer protection and regulatory oversight.

For instance, in the United Kingdom, the Financial Conduct Authority (FCA) has utilised AI technologies to combat online scams and protect consumers. That trend underscores a broader international movement within regulatory bodies to harness AI’s potential in safeguarding market integrity and enhancing regulatory efficiency.

EU charges Microsoft over Teams bundling

EU antitrust regulators have accused Microsoft of illegally bundling its Teams chat and video app with its Office product suite, claiming the company’s recent efforts to separate the two were insufficient. The European Commission stated that Microsoft breached antitrust rules by tying Teams to its popular Office 365 and Microsoft 365 suites, which stifled competition.

The regulatory action follows a 2020 complaint by Slack, a rival workspace messaging app owned by Salesforce. Microsoft introduced Teams to Office 365 in 2017 at no extra cost, replacing Skype for Business, and its use surged during the pandemic due to its video conferencing capabilities.

The European Commission has preliminarily determined that Microsoft’s changes do not adequately address the competition concerns and that further action is needed. Microsoft has expressed willingness to work with EU regulators to find acceptable solutions.

PayPal appoints new CTO amid recent AI services launch

PayPal has appointed Srini Venkatesan as its new Chief Technology Officer (CTO) to lead its artificial intelligence initiatives. Venkatesan will be in charge of areas such as AI and machine learning, information security, and product engineering. In his previous position at Walmart, he developed platforms to support the retail giant, including aspects of the Walmart+ subscription service. He has also worked at Yahoo and eBay, among other companies.

Why is this important?

Like others in the finance sector, PayPal has sought to embrace AI to improve its services. In January, the company announced new AI-driven tools, including some designed to make payment checkouts smoother. Other tools use customers’ purchase history, rather than their browsing history, to target them with offers.

‘Smart Receipts’ uses purchase history to recommend products, cashback and other deals on receipts. Similarly, the ‘Advanced Offers Platform’ uses AI to deliver targeted promotions based on a customer’s purchase history with any previous merchant. PayPal says it is shifting from general ads to personalised ‘offers’ to improve the customer experience.

In an article in PayPal’s newsroom, the company said it is adding simple privacy controls that let customers choose whether to share their data with merchants for personalised offers. However, given that browsing-based targeted advertising has already raised privacy concerns, targeting based on purchase history is likely to do so too. Venkatesan will be expected to implement the technology and address these concerns in his new role.

US DoJ to file lawsuit against TikTok for alleged children’s privacy violations

The US Department of Justice (DoJ) is set to file a consumer protection lawsuit against ByteDance’s TikTok later this year, focusing on alleged children’s privacy violations. The legal move comes on a referral from the Federal Trade Commission (FTC), but the DoJ will not pursue allegations that TikTok misled US consumers about data security, dropping claims that the company failed to inform users that China-based employees could access their personal and financial information.

The decision suggests that the primary focus will now be on how TikTok handles children’s privacy. The FTC referred a complaint against TikTok and its parent, ByteDance, to the DoJ, stating that its investigation had found evidence suggesting the company may be breaking the Children’s Online Privacy Protection Act. The federal act requires apps and websites aimed at kids to obtain parental consent before collecting personal information from children under 13.

Simultaneously, TikTok and ByteDance are challenging a US law that aims to ban the popular short video app in the United States starting from 19 January next year.

Meta to face US lawsuit by Australian billionaire over scam crypto ads on Facebook

A US judge has denied Meta Platforms’ attempt to dismiss a lawsuit filed by Australian billionaire Andrew Forrest. The lawsuit accuses Meta of negligence for allowing scam advertisements featuring Forrest’s likeness, promoting fake cryptocurrency and fraudulent investments, to appear on Facebook. Judge Casey Pitts ruled that Forrest could proceed with claims that Meta’s actions breached its duty to operate responsibly and that Meta misappropriated Forrest’s name and likeness for profit.

Meta had argued that it was protected under Section 230 of the Communications Decency Act, which typically shields online platforms from liability for third-party content. However, the judge determined that Forrest’s allegations raised questions about whether Meta’s advertising tools actively contributed to the misleading content rather than simply hosting it neutrally.

Forrest alleges that over 1,000 fraudulent ads featuring him appeared on Facebook in Australia from April to November 2023, resulting in millions of dollars in losses for victims. The lawsuit marks a significant step, challenging the usual immunity social media companies claim under Section 230 for their advertising practices. Forrest is seeking compensatory and punitive damages from Meta.

The decision follows Australian prosecutors’ refusal to pursue criminal charges against Meta over similar scam ads. Forrest, the executive chairman of Fortescue Metals Group, considers the judge’s ruling a strategic victory in holding social media companies accountable for fraudulent advertising.

FCC names Royal Tiger as first official AI robocall scammer gang

The US Federal Communications Commission (FCC) has identified Royal Tiger as the first official AI robocall scammer gang, marking a milestone in efforts to combat sophisticated cyber fraud. Royal Tiger has used advanced techniques like AI voice cloning to impersonate government agencies and financial institutions, deceiving millions of Americans through robocall scams.

These scams involve automated systems that mimic legitimate entities to trick individuals into divulging sensitive information or making fraudulent payments. Despite the FCC’s actions, experts warn that AI-driven scams will likely increase, posing significant challenges in protecting consumers from evolving tactics such as caller ID spoofing and persuasive social engineering.

While the FCC’s move aims to raise awareness and disrupt criminal operations, individuals are urged to remain vigilant. Tips include treating unsolicited calls with scepticism, using call-blocking services, and verifying callers’ identities by contacting official numbers directly. Avoiding sharing personal information over the phone without confirming a caller’s legitimacy is crucial to mitigating the risks posed by these scams.

Why does it matter?

As technology continues to evolve, coordinated efforts between regulators, companies, and the public are essential in staying ahead of AI-enabled fraud and ensuring robust consumer protection measures are in place. Vigilance and proactive reporting of suspicious activities remain key in safeguarding against the growing threat of AI-driven scams.

Meta halts AI launch in Europe after EU regulator ruling

Meta’s main EU regulator, the Irish Data Protection Commission (DPC), requested that the company delay training its large language models (LLMs) on content published publicly by adults on its platforms. In response, Meta announced it would not launch its AI in Europe for the time being.

The main reason behind the request is Meta’s plan to use this data to train its AI models without explicitly seeking consent. The company claims that without it, its AI ‘won’t accurately understand important regional languages, cultures or trending topics on social media’. It is already developing continent-specific AI technology. Another cause for concern is Meta’s use of information belonging to people who do not use its services: in a message to Facebook users, the company said it may process information about non-users if they appear in an image or are mentioned on its platforms.

The DPC welcomed Meta’s decision to delay implementation. The commission is leading the regulation of Meta’s AI tools on behalf of EU data protection authorities (DPAs), 11 of which received complaints from the advocacy group NOYB (None Of Your Business). NOYB argues that the GDPR is flexible enough to accommodate this AI, provided Meta asks for users’ consent. The delay comes just before Meta’s new privacy policy takes effect on 26 June.

Beyond the EU, the executive director of the UK’s Information Commissioner’s Office welcomed the delay, adding that ‘in order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset.’

European groups urge fairness in EU cybersecurity label for Big Tech

A proposed cybersecurity certification scheme (EUCS) for cloud services has raised concerns among 26 industry groups across Europe, who caution against potential discrimination towards major US tech firms like Amazon, Alphabet’s Google, and Microsoft. The European Commission, EU cybersecurity agency ENISA, and EU countries are set to discuss the scheme, which has seen multiple revisions since its draft release in 2020. The EUCS aims to help governments and businesses select secure and reliable cloud vendors, a critical consideration in the rapidly growing global cloud computing industry.

The latest version of the scheme, updated in March, removed stringent sovereignty requirements that would have forced US tech giants to form joint ventures or collaborate with EU-based companies to handle data within the bloc, a criterion for earning the highest EU cybersecurity label. In a joint letter, the industry groups argued for a non-discriminatory EUCS that fosters the free movement of cloud services across Europe, aligning with industry best practices and supporting Europe’s digital goals and security resilience.

The signatories, which include various chambers of commerce and industry associations from several European countries, emphasised the importance of diverse and resilient cloud technologies for their members to compete globally. They welcomed the removal of ownership controls and specific data protection requirements, arguing that these changes would ensure cloud security improvements without discriminating against non-EU companies.

EU cloud vendors like Deutsche Telekom, Orange, and Airbus have advocated for sovereignty requirements, fearing non-EU government access to European data under foreign laws. However, the industry groups contend that the inclusive approach of the revised EUCS will better serve Europe’s digital and security needs while promoting a competitive market environment.