The Indian government has issued notices to ride-hailing companies Ola and Uber, launching an investigation into allegations of price discrimination. Concerns have arisen over reports and user complaints suggesting that iPhone users are being charged significantly higher fares than Android users for the same rides. The investigation, led by the Central Consumer Protection Authority (CCPA), aims to determine whether these price discrepancies are in fact occurring and, if so, whether they constitute unfair trade practices.
The government has previously expressed strong opposition to differential pricing, deeming it an unfair and discriminatory practice. India is a crucial market for both Ola and Uber, with intense competition among various ride-hailing services. The outcome of this investigation could have significant implications for the industry, potentially impacting pricing models and consumer trust.
Beyond the ride-hailing sector, the CCPA will also examine potential pricing disparities in other sectors, including food delivery and online ticketing platforms. The broader investigation aims to identify and address any instances where consumers may be facing discriminatory pricing based on factors such as the device they use or other personal characteristics.
Ensuring fair and transparent pricing practices in the digital economy is crucial. As technology continues to shape our daily lives, it is essential to address concerns about potential algorithmic biases and discriminatory practices that may be embedded within digital platforms. The Indian government’s action sends a clear message that such practices will not be tolerated and that consumer protection remains a top priority.
Social media buzzed over the weekend as ChatGPT, the popular AI chatbot, mysteriously refused to generate the name ‘David Mayer.’ Users reported responses halting mid-sentence or error messages when attempting to input the name, sparking widespread speculation about Mayer’s identity and theories that he might have requested privacy through legal means.
OpenAI, the chatbot’s developer, attributed the issue to a system glitch. A spokesperson clarified, ‘One of our tools mistakenly flagged this name, which shouldn’t have happened. We’re working on a fix.’ The company has since resolved the glitch for ‘David Mayer,’ but other names continue to trigger errors.
Conspiracy theories emerged online, with some suggesting a link to David Mayer de Rothschild, who denied involvement, and others speculating connections to a deceased academic with ties to a security list. Experts noted the potential relevance of GDPR’s ‘right to be forgotten’ privacy rules, which allow individuals to request the removal of their data from digital platforms.
However, privacy specialists highlighted AI systems’ challenges in fully erasing personal data due to their reliance on massive datasets from public sources. While the incident has drawn attention to the complexities of AI data handling and privacy compliance, OpenAI remains tight-lipped on whether the glitch stemmed from a deletion request under GDPR guidelines. The situation underscores the tension between advancing AI capabilities and safeguarding individual privacy.
A federal judge has ruled that New York City’s law requiring food delivery companies to share customer data with restaurants is unconstitutional. The decision, handed down by US District Judge Analisa Torres, found that the law violated the First Amendment by improperly regulating commercial speech.
The law, introduced in 2021 to support local restaurants recovering from the COVID-19 pandemic, required delivery platforms like DoorDash and Uber Eats to share customer details. The delivery companies argued that the law threatened both customer privacy and their business by allowing restaurants to use the data for their own marketing purposes.
Judge Torres stated that New York City failed to prove the law was necessary and suggested alternative methods to support restaurants, such as letting customers opt in to sharing their data or providing financial incentives. City officials are reviewing the ruling, while delivery companies hailed it as a victory for data protection.
The New York City Hospitality Alliance expressed disappointment, claiming the ruling hurts small businesses and calling for the city to appeal the decision.
Starting this week, Meta users in Brazil will receive email and social media notifications detailing how their data might be used for AI development. Users will also have the option to opt out of this data usage. Brazil’s data protection authority, the ANPD, had initially halted Meta’s privacy policy in July, but it lifted the suspension last Friday after Meta agreed to make these disclosures.
In response to the ANPD’s concerns, Meta had also temporarily suspended using generative AI tools in Brazil, including popular AI-generated stickers on WhatsApp, a platform with a significant user base. This suspension was enacted while Meta engaged in discussions with the ANPD to address the agency’s concerns.
Despite the ANPD lifting the suspension, Meta has yet to confirm whether it will immediately reinstate the AI tools in Brazil. When asked, the company reiterated that the suspension was initially a measure taken during ongoing talks with the data protection authority.
The development marks an important step in Brazil’s efforts to ensure transparency and user control over personal data in the age of AI.
California’s efforts to regulate the use of digital replicas of performers took a significant step forward with the passage of AB 1836 in the state Senate. The new bill mandates that studios obtain explicit consent from the estates of deceased performers before creating digital replicas for use in films, TV shows, video games, and other media. The move comes just days after the California legislature passed AB 2602, which enforces similar consent requirements for living actors.
SAG-AFTRA, the union representing film and television performers, has strongly advocated for these measures, emphasising the importance of protecting performers’ rights in the digital age. In a statement released after the Senate’s approval of AB 1836, the union described the bill as a ‘legislative priority’ and urged Governor Gavin Newsom to sign it into law. The union’s stance highlights the growing concern over the unauthorised use of digital replicas, particularly as technology makes it increasingly easy to recreate performers’ likenesses long after they have passed away.
If signed into law, AB 1836 would ensure that the estates of deceased performers have control over how their likenesses are used, potentially setting a precedent for other states to follow. However, the bill also raises practical challenges, such as determining who has the authority to grant consent on behalf of the deceased, which could complicate its implementation. The bill reflects a broader push within the entertainment industry to establish clear legal protections against the exploitation of both living and deceased performers’ likenesses in a rapidly evolving digital landscape.
The passage of AB 1836, alongside AB 2602, underscores California’s role as a leader in entertainment industry legislation, particularly where technology intersects with performers’ rights. As the debate over digital replicas continues, AB 1836 could have far-reaching implications for the future of entertainment law.
The Delhi High Court has directed Google and Microsoft to file a review petition seeking the recall of a previous order mandating search engines to promptly restrict access to non-consensual intimate images (NCII) without requiring victims to repeatedly provide specific URLs. Both tech giants argued that proactively identifying and taking down NCII images is technologically infeasible, even with the assistance of AI tools.
The court’s order stems from a 2023 ruling requiring search engines to remove NCII within 24 hours, as per the IT Rules, 2021, or risk losing their safe harbour protections under Section 79 of the IT Act, 2000. That ruling proposed issuing a unique token upon the initial takedown, with search engines responsible for taking down any resurfaced content using pre-existing technology, thereby relieving victims of the burden of tracking and repeatedly reporting specific URLs. Moreover, the court suggested leveraging hash-matching technology and developing a ‘trusted third-party encrypted platform’ for victims to register NCII content or URLs, shifting the responsibility for identifying and removing resurfaced content away from victims and onto the platform while ensuring high standards of transparency and accountability.
However, Google expressed concerns regarding automated tools’ inability to discern consent in shared sexual content, potentially leading to unintended takedowns and infringing on free speech, echoing Microsoft’s apprehension about the implications of proactive monitoring on privacy and freedom of expression.
At the request of the German Federal Court of Justice, the Court of Justice of the European Union (CJEU) has held that, in the exercise of the right to be forgotten, search engine operators must dereference content that the requesting user shows to be manifestly inaccurate. In the case at hand, two managers of a group of investment companies asked Google to dereference results of searches made with their names that revealed articles containing inaccurate claims about the group, and to remove their photos from the results of an image search made on the basis of their names. The burden of proof lies with the requesting users, who must provide evidence capable of establishing the inaccuracy of the information; such evidence need not stem from a judicial decision. As regards the display of photos, the CJEU stated that search engine operators must conduct a separate balancing of the competing rights, taking into account the informative value of the photos without regard to the context of their publication on the internet page from which they are taken.