The Indian government has issued notices to ride-hailing companies Ola and Uber, launching an investigation into allegations of price discrimination. Concerns have arisen from reports and user complaints suggesting that iPhone users are charged significantly higher fares than Android users for the same rides. The investigation, led by the Central Consumer Protection Authority (CCPA), aims to determine whether these price discrepancies are indeed occurring and whether they constitute unfair trade practices.
The government has previously expressed strong opposition to differential pricing, deeming it an unfair and discriminatory practice. India is a crucial market for both Ola and Uber, with intense competition among various ride-hailing services. The outcome of this investigation could have significant implications for the industry, potentially impacting pricing models and consumer trust.
Beyond the ride-hailing sector, the CCPA will also examine potential pricing disparities in other sectors, including food delivery and online ticketing platforms. The broader investigation aims to identify and address any instances where consumers may be facing discriminatory pricing based on factors such as the device they use or other personal characteristics.
Ensuring fair and transparent pricing practices in the digital economy is crucial. As technology continues to shape our daily lives, it is essential to address concerns about potential algorithmic biases and discriminatory practices that may be embedded within digital platforms. The Indian government’s action sends a clear message that such practices will not be tolerated and that consumer protection remains a top priority.
LinkedIn, owned by Microsoft, faces a class-action lawsuit from Premium customers who allege that the platform improperly shared their private messages with third parties to train AI models. According to the complaint, LinkedIn introduced a new privacy setting last August that allowed users to control the sharing of their data, yet failed to adequately inform them that their messages could be used for AI training.
The plaintiffs claim that a quiet update to LinkedIn’s privacy policy on 18 September disclosed this data usage, while also stating that opting out of data sharing would not undo training that had already taken place.
The plaintiffs, representing millions of Premium users, seek damages for breaches of contract and violations of California’s unfair competition laws. In addition, they demand compensation of $1,000 for each individual affected by alleged violations of the federal Stored Communications Act. The lawsuit highlights concerns over the potential misuse of customer data, asserting that LinkedIn deliberately obscured its practices to evade scrutiny regarding user privacy.
LinkedIn has denied the allegations, stating that the claims lack merit. The legal action came just hours after President Donald Trump announced a significant AI investment initiative backed by Microsoft and other major companies. The case, De La Torre v. LinkedIn Corp, was filed in federal district court in San Jose, California.
With privacy becoming an increasingly crucial issue, the implications of this lawsuit could resonate throughout the tech industry. Customers are scrutinising platforms’ commitments to safeguarding personal information, especially in the context of rapidly evolving AI technologies.
Lina Khan, a prominent advocate of strong antitrust enforcement, has announced her resignation as chair of the US Federal Trade Commission (FTC) in a memo to staff. Her departure, set to occur in the coming weeks, marks the end of a tenure that challenged numerous corporate mergers and pushed for greater accountability among powerful companies.
During her leadership, Khan spearheaded high-profile lawsuits against Amazon, launched investigations into Microsoft, and blocked major deals, including Kroger’s planned $25 billion acquisition of Albertsons. Her efforts often focused on protecting consumers and workers from potential harms posed by dominant corporations.
Khan, the youngest person ever to lead the FTC, first gained recognition in 2017 for her work criticising Amazon’s market practices. She argued that tech giants exploited outdated antitrust laws, allowing them to sidestep scrutiny. Her aggressive approach divided opinion, and courts struck down some of her measures, including the agency’s rule banning noncompete clauses.
Following Khan’s exit, the FTC faces a temporary deadlock with two Republican and two Democratic commissioners. Republican Andrew Ferguson has assumed the role of chair, and a Republican majority is expected once the Senate confirms Mark Meador, a pro-enforcement nominee, to complete the five-member commission.
Major tech companies, including Meta’s Facebook, Elon Musk’s X, YouTube, and TikTok, have committed to tackling online hate speech through a revised code of conduct now linked to the European Union’s Digital Services Act (DSA). Announced Monday by the European Commission, the updated agreement also includes platforms like LinkedIn, Instagram, Snapchat, and Twitch, expanding the coalition originally formed in 2016. The move reinforces the EU’s stance against illegal hate speech, both online and offline, according to EU tech commissioner Henna Virkkunen.
Under the revised code, platforms must allow not-for-profit organisations or public entities to monitor how they handle hate speech reports and ensure at least 66% of flagged cases are reviewed within 24 hours. Companies have also pledged to use automated tools to detect and reduce hateful content while disclosing how recommendation algorithms influence the spread of such material.
Additionally, participating platforms will provide detailed, country-specific data on hate speech incidents categorised by factors like race, religion, gender identity, and sexual orientation. Compliance with these measures will play a critical role in regulators’ enforcement of the DSA, a cornerstone of the EU’s strategy to combat illegal and harmful content online.
The United States Federal Trade Commission (FTC) has referred a complaint about Snap Inc’s AI-powered chatbot, My AI, to the Department of Justice (DOJ) for further investigation. The FTC alleges the chatbot caused harm to young users, though specific details about the alleged harm remain undisclosed.
Snap Inc defended its chatbot, asserting that My AI operates under rigorous safety and privacy measures, and criticised the FTC for lacking concrete evidence to support its claims. Despite the company’s reassurances, the FTC stated it had uncovered indications of potential legal violations.
The announcement weighed on Snap’s stock, with shares dropping 5.2% to close at $11.22 on Thursday. The FTC noted that publicising the complaint’s transfer to the DOJ was in the public interest, underscoring the gravity of the allegations.
Apple has halted AI-powered notification summaries for news and entertainment apps after backlash over misleading alerts. The BBC complained after a summary misrepresented one of its articles about the murder of UnitedHealthcare’s CEO.
The latest developer previews for iOS 18.3, iPadOS 18.3, and macOS Sequoia 15.3 disable notification summaries for such apps, with Apple planning to reintroduce them after improvements. Notification summaries will now appear in italics to help users distinguish them from standard alerts.
Users will also gain the ability to turn off notification summaries for individual apps directly from the Lock Screen. Apple will notify users in the Settings app that the feature remains in beta and may contain errors.
A public beta is expected next week, but the general release date for iOS 18.3 remains unclear. Apple had already announced plans to clarify that summary texts are generated by Apple Intelligence.
The US Supreme Court will hear a challenge on Wednesday to a Texas law that requires adult websites to verify users’ ages before granting access to material deemed harmful to minors. The law, part of a broader trend across Republican-led states, requires users to submit personal information proving they are at least 18 years old to access pornographic content. The case raises significant First Amendment concerns, as adult entertainment industry groups argue that the law unlawfully restricts free speech and exposes users to risks such as identity theft and data breaches.
The challengers, including the American Civil Liberties Union and the Free Speech Coalition, contend that alternative methods like content-filtering software could better protect minors without infringing on adults’ rights to access non-obscene material. Texas, however, defends the law, citing concerns over the ease with which minors can access explicit content online.
This case is significant because it will test the balance between state efforts to protect minors from explicit content and the constitutional rights of adults to access protected expression. If the Supreme Court upholds the law, it could set a precedent for similar age-verification measures across the US.
Indonesia plans to implement interim guidelines to protect children on social media as it works toward creating a law to establish a minimum age for users, a senior communications ministry official announced on Wednesday. The move follows discussions between Communications Minister Meutya Hafid and President Prabowo Subianto, aiming to address concerns about online safety for children.
The proposed law will mirror Australia’s recent legislation, which banned children under 16 from social media platforms like Instagram, Facebook, and TikTok and imposed penalties on tech companies that fail to comply. In the meantime, Indonesia will issue regulations requiring platforms to follow child protection guidelines, shielding children from harmful content while still allowing a degree of access.
Public opinion on the initiative is divided. While parents like Nurmayanti support stricter controls to reduce exposure to harmful material, human rights advocates, including Anis Hidayah, urge caution to ensure children’s access to information is not unduly restricted. A recent survey revealed nearly half of Indonesian children under 12 use the internet, with many accessing social media platforms such as Facebook, Instagram, and TikTok.
This regulatory push reflects Indonesia’s broader efforts to balance digital innovation with safeguarding younger users in its rapidly growing online landscape.
Indonesia is preparing to introduce regulations setting a minimum age for social media users, aiming to shield children from potential online risks, according to Communications Minister Meutya Hafid. The announcement follows Australia’s recent ban on social media access for children under 16, which imposes penalties on platforms like Meta’s Facebook and Instagram, as well as TikTok, for non-compliance.
While the specific age limit for Indonesia remains undecided, Minister Hafid stated that President Prabowo Subianto supports the initiative, emphasising the importance of child protection in the digital space. The move highlights concerns about young users’ exposure to inappropriate content and data privacy risks.
Indonesia, with a population of approximately 280 million, has significant internet usage. A recent survey found internet penetration at 79.5%, with nearly half of children under 12 accessing the web, often using platforms like Facebook, Instagram, and TikTok. Among “Gen Z” users aged 12 to 27, internet penetration reached 87%. The proposed regulation reflects growing global efforts to prioritise child safety online.
Ian Russell, father of Molly Russell, has called on the UK government to take stronger action on online safety, warning that delays in regulation are putting children at risk. In a letter to Prime Minister Sir Keir Starmer, he criticised Ofcom’s approach to enforcing the Online Safety Act, describing it as a “disaster.” Russell accused tech firms, including Meta and X, of prioritising profits over safety and moving towards a more dangerous, unregulated online environment.
Campaigners argue that Ofcom’s guidelines contain major loopholes, particularly in addressing harmful content such as live-streamed material that promotes self-harm and suicide. While the government insists that tech companies must act responsibly, the slow progress of new regulations has raised concerns. Ministers acknowledge that additional legislation may be required as AI technology evolves, introducing new risks that could further undermine online safety.
Russell has been a prominent campaigner for stricter online regulations since his daughter’s death in 2017. Despite the Online Safety Act granting Ofcom the power to fine tech firms, critics believe enforcement remains weak. With concerns growing over the effectiveness of current safeguards, pressure is mounting on the government to act decisively and ensure platforms take greater responsibility in protecting children from harmful content.