Appario, a former top seller on Amazon India, has petitioned a court to dismiss an antitrust investigation that concluded Amazon and several sellers breached local competition laws. The Competition Commission of India (CCI) alleges that Amazon, Walmart’s Flipkart, and certain smartphone brands favoured select sellers and prioritised specific listings. These accusations were based on a 2021 Reuters investigation, which exposed Amazon’s internal practices. Despite the findings, Amazon continues to deny any misconduct.
Appario, which has ceased selling on Amazon, is contesting the CCI’s findings in the Karnataka High Court, asserting that the report implicating it should be “set aside.” This legal action marks the first challenge to the CCI’s ongoing investigation, initiated in 2020, and poses a significant obstacle for Amazon in India, one of its most important markets.
The CCI previously conducted raids on Appario and other sellers during its investigation. Court records indicate that Appario is also challenging a CCI order that requires it to submit financial statements following the investigation. Neither Amazon nor Appario has commented on the ongoing legal proceedings.
A political consultant has been fined $7.7 million by the Federal Communications Commission (FCC) for using AI to generate robocalls mimicking President Biden’s voice. The calls, aimed at New Hampshire voters, urged them not to vote in the Democratic primary, sparking significant controversy.
Steven Kramer, the consultant behind the scheme, worked for a challenger to Biden in the primaries. He admitted to paying $500 for the calls to highlight the dangers of AI in political campaigns. Kramer’s actions violated FCC regulations prohibiting misleading caller ID information.
The FCC has given Kramer 30 days to pay the fine, warning that further legal action will follow if he fails to comply. The commission continues to raise concerns over AI’s potential misuse in elections, pushing for stricter regulations to prevent fraud.
The National Communications Commission (NCC) has introduced new regulations to significantly curtail telecom fraud in Taiwan. These measures establish a comprehensive framework for identifying users categorised as ‘high-risk’ based on repeated involvement in fraudulent activities. Such users will be permitted to apply for only three telephone numbers in total across the three major telecom providers within a three-year period. The initiative is designed to deter fraudulent behaviour by restricting access to essential communication services.
Moreover, these regulations align with the recently enacted Fraud Hazard Prevention Act, which provides a foundational legal framework for addressing and mitigating fraud within the telecom sector. The NCC also prioritises collaboration with governmental agencies such as the National Police Agency (NPA) and the National Immigration Agency (NIA). This partnership aims to develop a comprehensive strategy for effectively combating telecom fraud and protecting consumers.
To further this goal, the NCC is implementing advanced verification systems that allow telecom companies to access NIA and NPA databases. This access will enable them to re-authenticate user identities upon receiving fraud alerts, ensuring that only legitimate users can access telecom services. The proactive approach fosters a safer environment for subscribers and empowers providers to make informed decisions to prevent fraud before it occurs.
In addition to these domestic initiatives, the NCC focuses on the international dimensions of telecom fraud, particularly regarding international roaming services. Under the new regulations, telecom providers must verify that users activating roaming services have entered Taiwan and can present appropriate identification.
This measure aims to curb the misuse of roaming services for fraudulent purposes. Furthermore, the NCC plans to monitor high-risk offshore telecom operators, assessing their involvement in fraudulent activities and exploring the potential need for mutual legal assistance agreements with their home countries to strengthen enforcement efforts.
Meta, Facebook’s owner, has been fined €91 million ($101.5 million) by the EU’s privacy regulator for mishandling user passwords. The issue, which surfaced five years ago, involved Meta storing certain users’ passwords in plaintext, i.e. as readable text without hashing or any other protection. Ireland’s Data Protection Commission (DPC), which oversees GDPR compliance for many US tech firms operating in the EU, launched an investigation after Meta reported the incident.
Meta admitted the error, emphasising that third parties had not accessed the exposed passwords. However, storing passwords in an unprotected format is considered a major security flaw, as it exposes users to significant risk if unauthorised individuals gain access to the data. Deputy Commissioner Graham Doyle underscored that storing passwords in plaintext is widely regarded as unacceptable because of the potential for abuse.
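For context on why plaintext storage is treated so seriously: standard practice is to store only a salted, deliberately slow hash of each password, so that even a leaked database does not reveal the passwords themselves. The sketch below, using Python’s standard library, is purely illustrative of that general technique and says nothing about Meta’s actual systems:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only these are stored, never the plaintext."""
    salt = os.urandom(16)  # random per-user salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest from the attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("s3cret")
print(verify_password("s3cret", salt, digest))  # True
print(verify_password("guess", salt, digest))   # False
```

The high iteration count makes each guess expensive for an attacker, and the constant-time comparison avoids leaking information through timing.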
This fine adds to Meta’s growing list of penalties under the EU’s General Data Protection Regulation (GDPR). To date, Meta has been fined a total of €2.5 billion for various violations, including a record €1.2 billion fine in 2023, which it is currently appealing. These repeated infractions highlight ongoing concerns about how the company handles sensitive user data.
The US Federal Trade Commission (FTC) has cracked down on five companies for deceptive use of AI. Three cases involved businesses falsely claiming to help consumers generate passive income through e-commerce. The FTC also reached settlements with DoNotPay and Rytr, two companies accused of misleading consumers with their AI tools. DoNotPay, which marketed automated legal services, agreed to a $193,000 settlement and will notify customers of the tool’s limitations, while Rytr faced criticism for allowing users to create fake product reviews through its AI writing feature.
FTC Chair Lina M. Khan stressed that AI tools must comply with existing laws, making it clear that using AI to deceive or mislead consumers is illegal. Neither company admitted wrongdoing in its settlement: Rytr agreed to discontinue its review-generating feature, while DoNotPay accepted the terms without conceding fault.
The FTC’s actions have sparked internal debate on how to regulate AI. While all five commissioners supported cracking down on false AI claims, the two Republican commissioners raised concerns about the agency’s authority in the Rytr case. This division highlights differing views within the FTC on the scope of its regulatory powers when addressing AI-related issues.
Google has filed a formal complaint with the European Commission over Microsoft’s cloud business practices. The tech giant argues that Microsoft uses its dominant position with Windows Server to stifle competition and lock customers into its Azure platform. Specifically, Google claims Microsoft enforces heavy mark-ups on users of rival cloud services and restricts access to essential security updates.
The dispute follows a recent settlement where Microsoft paid €20 million to resolve concerns raised by European cloud providers. However, the agreement excluded key rivals like Google and Amazon Web Services (AWS), fuelling further criticism. Google insists only regulatory action will halt what it sees as Microsoft’s monopolistic approach, urging the EU to step in and ensure fair competition.
Microsoft denies the accusations, stating that it has settled similar issues amicably with other European providers. A Microsoft spokesperson expressed confidence that Google would fail to persuade the European Commission, just as it had failed to persuade European businesses.
Google believes immediate intervention is necessary to prevent the cloud market from becoming increasingly restrictive. The company warns that Microsoft’s influence over the rapidly growing European cloud sector could limit options for customers and hurt competitors.
In a crucial court case in Philadelphia, Coinbase, the largest US cryptocurrency exchange, confronted the Securities and Exchange Commission (SEC). The dispute stems from a lawsuit over the agency’s refusal to act on a 2022 petition in which the exchange asked the SEC to write new regulations for digital assets. The petition sought to clarify when a digital asset is deemed a security and proposed a regulatory framework designed specifically for the cryptocurrency sector.
The SEC rejected Coinbase’s request in December 2023, asserting that current regulations are adequate for the cryptocurrency sector. Coinbase’s attorney argued that the SEC’s refusal to clarify registration processes has hindered the exchange’s ability to operate within US laws. In contrast, an SEC lawyer maintained that the agency is not obligated to create new rules, suggesting that businesses like Coinbase must adapt to the existing regulatory framework.
This legal dispute highlights an ongoing tension between the cryptocurrency industry and the SEC, which asserts that many crypto tokens qualify as securities and fall under its jurisdiction. The crypto sector largely views itself as existing in a regulatory grey area, pushing for new legislation to provide more precise guidelines for managing digital assets. The struggle underscores the need for a cohesive framework that addresses the unique challenges of the rapidly evolving crypto market.
As the appeals court considers both sides, the outcome could have significant implications for how cryptocurrencies are regulated in the United States, potentially shaping the industry’s future.
Cloudflare is launching a marketplace that will let websites charge AI companies for scraping their content, aiming to give smaller publishers more control over how AI models use their data. AI companies scrape thousands of websites to train their models, often without compensating the content creators, which can threaten the business models of many smaller websites. The marketplace, launching next year, will allow website owners to negotiate deals with AI model providers, charging them based on how often they scrape the site or on terms the owners set themselves.
Cloudflare’s launch of AI Audit is a significant step towards giving website owners better control over AI bot activity on their sites. By providing detailed analytics on which AI bots access their content, the tool empowers site owners to make informed decisions about managing bot traffic. The ability to block specific bots while allowing others helps mitigate unwanted scraping, which can degrade performance and increase operational costs. The tool could be particularly useful for businesses and content creators who rely on their online presence and want to safeguard their resources.
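As background on bot control generally (separate from AI Audit, which enforces decisions at the network edge), many sites already declare crawler policy in a robots.txt file. The sketch below uses real crawler user-agent names, but compliance with robots.txt is voluntary, which is partly why tools that can block bots outright are attractive:

```text
# robots.txt — disallow some AI training crawlers, allow everything else
User-agent: GPTBot      # OpenAI's training crawler
Disallow: /

User-agent: CCBot       # Common Crawl
Disallow: /

User-agent: *           # everyone else, e.g. search engine crawlers
Allow: /
```

A well-behaved crawler fetches this file before crawling and skips the paths it is disallowed from; a misbehaving one can simply ignore it.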
Cloudflare’s CEO, Matthew Prince, believes this marketplace will create a more sustainable system for publishers and AI companies. While some AI firms may resist paying for currently free content, Prince argues that compensating creators is crucial for ensuring the continued production of quality content. The initiative could help balance the relationship between AI companies and content creators, allowing even small publishers to profit from their data in the AI age.
A German court has ruled that Amazon is using Nokia’s patented video technologies without a proper licence, according to a statement from Nokia. The Munich Regional Court found that Amazon’s streaming devices unlawfully use video-related technologies patented by the Finnish company.
Nokia’s Chief Licensing Officer, Arvin Patel, expressed satisfaction with the ruling, stating that Amazon has been selling these devices without the necessary licences in place. The ruling highlights ongoing disputes between tech giants over intellectual property.
In response to Nokia’s legal actions, Amazon filed a lawsuit in July in a Delaware federal court, accusing the Finnish company of infringing a dozen Amazon patents related to cloud-computing technology.
This legal battle is part of a broader pattern of disputes between major tech companies, as patent rights continue to play a critical role in the development of new technologies and services.
A recent report from the US Federal Trade Commission (FTC) has criticised social media platforms for lacking transparency in how they manage user data. Companies such as Meta, TikTok, and Twitch have been highlighted for inadequate data retention policies, raising significant privacy concerns.
Social platforms collect large amounts of data using tracking technologies and by purchasing information from data brokers, often without users’ knowledge. Much of this data fuels the development of AI, with little control given to users. Data privacy for teenagers remains a pressing issue, leading to recent legislative moves in Congress.
Some companies, including X (formerly Twitter), responded by saying that they have improved their data practices since 2020. Others declined to comment. Advertising industry groups defended data collection, claiming it supports free access to online services.
FTC officials are concerned about the risks posed to individuals, especially those not even using the platforms, due to widespread data collection. Inadequate data management by social platforms may expose users to privacy breaches and identity theft.